Optimality and Suboptimality in Quadratic Programs
Dear all,
I have a two-stage stochastic programming model that uses a quadratic utility function of the form -(1/b)*(X-b)^2 as the objective to be maximized. Here X represents the return of each scenario.
To implement this, I have declared a variable u_s for each scenario s and added the quadratic constraints:
for s in S:
    model.addConstr(u[s] <= -(1.0 / b) * (x[s] - b) * (x[s] - b))
The objective function is then the expectation over the utility in each scenario.
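Putting it together, a stripped-down, self-contained version of my setup looks roughly like this (the data values are placeholders; in the real model the scenario returns x_s are tied to the first- and second-stage decisions through further constraints):

import gurobipy as gp
from gurobipy import GRB

S = [0, 1, 2]                    # scenarios (placeholder)
p = {0: 0.5, 1: 0.3, 2: 0.2}     # scenario probabilities (placeholder)
b = 1000.0                       # utility parameter

model = gp.Model("two_stage_utility")
x = model.addVars(S, lb=-GRB.INFINITY, name="x")   # scenario return (simplified here)
u = model.addVars(S, lb=-GRB.INFINITY, name="u")   # utility per scenario

# u[s] <= -(1/b)*(x[s]-b)^2 is a convex quadratic constraint (for b > 0)
for s in S:
    model.addConstr(u[s] <= -(1.0 / b) * (x[s] - b) * (x[s] - b), name=f"utility_{s}")

# maximize expected utility
model.setObjective(gp.quicksum(p[s] * u[s] for s in S), GRB.MAXIMIZE)
model.optimize()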
Running this model for different values of b, I find that some runs terminate as optimal while most terminate as sub-optimal (status 13).
My questions are:
- Why is this the case? How is this being solved?
- Can I control whether it becomes optimal or not?
- How can I determine the degree of sub-optimality?
Thank you in advance!
Hi Viviane,
Why is this the case? How is this being solved?
For convex continuous models, Gurobi uses a Barrier solver (an interior-point algorithm). Sub-optimal solutions are most often a result of shaky model numerics. Did you have a look at our Guidelines for Numerical Issues? Following them should help improve the Barrier convergence.
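As a side note, a sub-optimal termination (status 13) still returns a feasible point that you can inspect, for example with a small helper like this (a generic sketch, not specific to your model):

from gurobipy import GRB

def report_status(model):
    # model: a gurobipy Model on which optimize() has already been called
    if model.Status == GRB.OPTIMAL:
        print("Solved to optimality, objective:", model.ObjVal)
    elif model.Status == GRB.SUBOPTIMAL:   # status code 13
        print("Sub-optimal termination, feasible objective:", model.ObjVal)
    else:
        print("Other termination status:", model.Status)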
Can I control whether it becomes optimal or not?
If you cannot improve the numerics of your model, you could try experimenting with the NumericFocus parameter. Often, the BarHomogeneous algorithm also helps. Please note that improving the numerical properties of your model should always be prioritized over just setting parameters.
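For example, both can be set through the parameter interface (the file name below is just a placeholder for however you build your model):

import gurobipy as gp

model = gp.read("your_model.lp")      # placeholder: use the model you build in your script
model.Params.NumericFocus = 3         # 0 = automatic (default), 3 = maximum numerical care
model.Params.BarHomogeneous = 1       # switch the homogeneous barrier algorithm on
model.optimize()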
How can I determine the degree of sub-optimality?
You could print the quality of your solution to check the constraint/bound violations. In rare cases, a sub-optimal solution can be far away from the truly optimal point. This is most often the case for badly scaled models.
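In gurobipy, that could look roughly like this small helper (printQuality, ConstrVio and BoundVio are standard methods/attributes of the Model object):

def print_solution_quality(model):
    # model: a gurobipy Model on which optimize() has already been called
    model.printQuality()                                   # summary of residuals and violations
    print("max constraint violation:", model.ConstrVio)    # largest violation of any constraint
    print("max bound violation:", model.BoundVio)          # largest violation of any variable bound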
Best regards,
Jaromił
Dear Jaromił,
thanks for your prompt response.
My assumption is that what you mean by shaky numerics in my case could be the fact that I have (continuous) variables that range between 0 and 5,000 and at the same time probabilities between 1.35e-07 and 0.5. These probabilities are then multiplied with the outcome of each scenario, which can be as low as -5e05 and as high as 1.1e06 (price coefficients ranging from about 1.0 to 150.0).
If I understood correctly what you've shared in the links, then I could try scaling up the probabilities and/or scaling down some of the inputs that lead to such large outcomes. Could you confirm that this is the right direction?
And also to confirm: the Barrier solver should not normally produce sub-optimal solutions for an SOCP, as it solves for an exact, global optimum? So sub-optimality is an issue I am facing because of a badly scaled model, and not a problem of the QP/SOCP formulation or the solution method.
Best Regards and thank you again,
Viviane
Hi Viviane,
My assumption is that what you mean by shaky numerics in my case could be the fact that I have (continuous) variables that range between 0 and 5,000 and at the same time probabilities between 1.35e-07 and 0.5. These probabilities are then multiplied with the outcome of each scenario, which can be as low as -5e05 and as high as 1.1e06 (price coefficients ranging from about 1.0 to 150.0).
Yes, this definitely sounds like the numerical properties of your model should be improved.
If I understood correctly what you've shared in the links, then I could try scaling up the probabilities and/or scaling down some of the inputs that lead to such large outcomes. Could you confirm that this is the right direction?
Correct, it would be ideal if the coefficient ranges and bound ranges of all variables were no more than 4 orders of magnitude apart. Of course, this is often wishful thinking, so 6-7 orders of magnitude apart may often be good enough.
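To see how far apart the ranges currently are, you can print the model statistics, for instance with a small helper like this (printStats and the Min*/Max* statistics attributes are part of the gurobipy Model API):

def print_ranges(model):
    # model: a gurobipy Model (can be called before optimizing)
    model.printStats()                                                  # prints coefficient, bound and RHS ranges
    print("matrix coefficients:", model.MinCoeff, "to", model.MaxCoeff)
    print("objective coefficients:", model.MinObjCoeff, "to", model.MaxObjCoeff)
    print("variable bounds:", model.MinBound, "to", model.MaxBound)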
And also to confirm: the Barrier solver should not normally produce sub-optimal solutions for an SOCP, as it solves for an exact, global optimum?
The word "exact" is too strong here. Numerical algorithms solve for global optima up to some given tolerance. In the case of the QCP Barrier algorithm, the tolerance is controlled by the BarQCPConvTol parameter.
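For illustration, it could be tightened like this (the default is 1e-6; a smaller value asks the barrier algorithm to get closer to the true optimum at the cost of more iterations):

# model: your gurobipy Model containing the quadratic utility constraints
model.Params.BarQCPConvTol = 1e-8     # default is 1e-6
model.optimize()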
So sub-optimality is an issue I am facing because of a badly scaled model, and not a problem of the QP/SOCP formulation or the solution method.
Yes, a badly scaled model is almost always the reason for sub-optimal terminations of the Barrier algorithm. There are also a few other factors that might play a role, for example very dense constraints. These can lead to an accumulation of numerical error during mathematical operations, which can be significant for Barrier's convergence.
Best regards,
Jaromił