Checking intermediate solutions against a threshold (Linear programming)
I am trying to solve a set of inequalities of the form Ax ≤ b together with cx ≥ d. I know that solving the LP max qx s.t. Ax ≤ b for a nonnegative q that is positive in at least one position gives a pointwise maximal vector x satisfying Ax ≤ b. However, since I am only interested in the existence of a point that additionally satisfies cx ≥ d, I would like to speed up the procedure.
I have tried the following:
(I) Solve the LP with constraints Ax ≤ b and cx ≥ d under a constant objective (setting the objective to the zero vector).
(II) Solve max qx s.t. Ax ≤ b for a positive q and check whether the optimal solution satisfies cx ≥ d.
It seems, for some reason, that (II) is faster than (I) in many cases, which I don't understand.
So I tried to run the LP max cx s.t. Ax ≤ b together with a callback that checks whether the current objective value is already greater than d, and terminates the optimization if that is the case.
I use SPX_OBJVAL to access this value in the callback.
However, after terminating, I cannot access the point that was used to compute the objective value. Is there a way of doing so? Do I have to check in the callback whether a feasible point has been found before accessing SPX_OBJVAL, or is this guaranteed? Is there another, preferred way of solving this problem?

Whether your approach makes sense or not depends on the LP algorithm. Please keep in mind that in dual simplex, the first primal-feasible solution found is typically already optimal, so early termination makes no sense with any objective function. (That's why I like to call dual simplex an "exterior-point method". :D ) With primal simplex, it might make sense, but primal simplex might be inferior to dual simplex. I don't know about barrier, as I never implemented it. You could play around with the three algorithms.
Furthermore, I think that (II) is faster than (I) because a zero objective function often causes dual degeneracy. Very often, solving with some nonzero objective function is faster.
Best regards,
Thomas
Thank you for your comment. That clears things up for me a bit.
Regarding the Gurobi interface: if I use primal simplex and the callback finds that the current value of SPX_OBJVAL is already above the given threshold, how do I access the corresponding solution?
My current attempt is to terminate using model.terminate(), but then I get Status=11 (INTERRUPTED) and SolCount=0, and I cannot access any of the variable values.
Regards,
Simon
Maybe there are better ways to achieve your goal, but couldn't you call terminate(), then remove the objective function, and then restart primal simplex? This should terminate in 0 (or 1) iterations, as the current basis is feasible (and optimal for the zero objective).
Best regards,
Thomas
Unfortunately, that does not seem to work: if I change the objective function, the current basis seems to be discarded; the solver starts from scratch and needs many iterations (>20).
But there must be a way to extract the current basis from an interrupted optimization instance, no? If I don't change the objective function, I can see that it restarts from the basis it had computed before the interrupt. I also tried to access the VBasis attribute of the variables, but that is not accessible after an interrupt either.
Thanks again for your help and best regards,
Simon
Hmm, I guess only someone who knows the internals of Gurobi can give an adequate answer; I would not expect such behaviour.
Does this also happen if you set DualReductions (and maybe PreDual) to 0? Does this happen if you disable the presolver completely?
Best regards,
Thomas
If I disable Presolve completely, I still observe the same behaviour. I have posted my question on Stack Overflow; maybe somebody there can help: