I am trying to implement Benders decomposition for a minimization problem using lazy optimality cuts in a callback (optimality cuts only, because my subproblem is feasible for every feasible master solution). First, I build a master model containing only the master variables. Then I solve the master model, and inside the callback at "where == GRB.Callback.MIPSOL" I read the objective value of the incumbent to get a lower bound (LB) and solve the subproblem to get an upper bound (UB). If UB - LB > tol, I add a lazy optimality cut to the master.

The model runs, but it converges to a wrong solution whose objective value is far below the true optimum. It seems the initial cuts are weak, so the UB drops below the optimal value. One way to avoid this is to restart the optimization with the violated optimality cuts added as regular constraints, but that implementation is slower than the straightforward iterative implementation of Benders decomposition.

Is there a way to tell the solver that the optimal value cannot be less than a specific value?
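For reference, the LB/UB/cut logic I am describing follows the standard Benders loop. Here is a minimal self-contained sketch of that loop on a hypothetical toy problem (plain Python, no solver, with the master "solved" by grid search purely for illustration); the problem data, the epigraph variable eta, and the cut coefficients are all assumptions for the example, not my actual model:

```python
# Toy Benders decomposition illustrating the LB/UB/cut logic.
# Hypothetical problem: min x + y  s.t.  y >= 2 - x,  y >= x,  0 <= x <= 3.
# The master keeps x plus an epigraph variable eta that underestimates
# the subproblem value q(x) = min{ y : y >= 2 - x, y >= x } = max(2 - x, x).

def solve_master(cuts, eta_lb=0.0, grid=1000):
    # Brute-force the (here one-dimensional) master on a grid:
    # min x + eta  s.t.  eta >= a*x + b for each accumulated cut, eta >= eta_lb.
    best = None
    for i in range(grid + 1):
        x = 3.0 * i / grid
        eta = max([eta_lb] + [a * x + b for (a, b) in cuts])
        if best is None or x + eta < best[0]:
            best = (x + eta, x)
    return best  # (LB, x*)

def solve_subproblem(x):
    # Evaluate q(x) and return the supporting optimality cut eta >= a*x + b
    # that is active (tight) at this x.
    if 2 - x >= x:
        return 2 - x, (-1.0, 2.0)  # cut: eta >= -x + 2
    return x, (1.0, 0.0)           # cut: eta >= x

cuts, ub, tol = [], float("inf"), 1e-6
for _ in range(20):
    lb, x_star = solve_master(cuts)          # master objective gives the LB
    q, cut = solve_subproblem(x_star)        # subproblem value completes the UB
    ub = min(ub, x_star + q)
    if ub - lb <= tol:                       # converged: LB meets UB
        break
    cuts.append(cut)                         # otherwise add the optimality cut

print(lb, ub)  # both converge to the true optimum 2.0
```

In my actual implementation this loop is replaced by Gurobi's MIPSOL callback with model.cbLazy, but the bounds and the cut-violation check are the same as above.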