GRB.MAXIMIZE identifies lower objective function values than GRB.MINIMIZE. How can this be avoided?
Hello,
I have a fairly large optimization problem (an MIQP) with many bilinear terms in the objective function. The issue I face is that with GRB.MINIMIZE, Gurobi identifies only a single solution, say 1000, but with GRB.MAXIMIZE it identifies three solutions, say 1000, 1050, and 1084. Since the latter is a maximization, it converges to 1000. My question is: when solutions with values 1050 and 1084 clearly exist, why does GRB.MINIMIZE fail to reach them?
I believed Gurobi could handle the nonconvexities arising from bilinear terms. I tried varying the Seed parameter, but it doesn't help. What can be done to build trust in such an optimization?

Hi Gaurav,
One way to check what is happening here is to create and solve a fixed model in which the lower and upper bounds of each variable are fixed to that variable's value in the solution with objective value 1084. If the fixed model is infeasible, you can inspect the infeasibility, for example by running the computeIIS method.
Please take a look at this article for more details: How do I diagnose a suboptimal objective value returned as optimal by Gurobi?
Best regards,
Simran
Hi Simranjit,
Thank you very much for your reply. However, if GRB.MAXIMIZE is able to identify 1050 and 1084 as suboptimal solutions, I do not understand why the fixed model (with variables fixed to the values from the 1084 solution) would turn out to be infeasible.
I have another issue. I tried retrieving the variable values of the suboptimal solutions of the maximization problem using the SolutionNumber parameter. During the solve, Gurobi prints that it found 4 solutions: 416K, 433K, 433K, and 433K. However, when I set SolutionNumber to 3 and query the objective via PoolObjVal, it still returns 416K instead of 433K. What could be wrong?
Regards,
Gaurav