How does incremental solving work with presolve?
Hi,
I added some constraints to a model and called model.solve(), which seems to trigger presolve.
Optimize a model with 2241 rows, 5671 columns and 339845 nonzeros
...
Presolve removed 1234 rows and 4072 columns
Presolve time: 0.04s
Presolved: 1007 rows, 1599 columns, 83284 nonzeros
I then added some new constraints (bounds) to the model. This time, when I call model.solve(), presolve does not seem to be triggered anymore. It just prints the following and starts solving:
Optimize a model with 2241 rows, 5671 columns and 339845 nonzeros
I'm writing to ask whether the second solve() call takes advantage of the presolve run during the first solve() call.
Thanks a lot!
Hi,
Could you provide more of the log output? Could you also provide some more information about your model and a code snippet showing how exactly you call the Model.optimize() function twice?
In general, Gurobi always tries to utilize information obtained from a previous run. When you solve a MIP, then add a constraint and solve again, Gurobi will check whether the optimal solution of the first solve provides an initial feasible point for the second optimization. For LPs, Gurobi will try to re-use the optimal basis in the second optimization run and may not print the presolve message.
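For illustration, here is a minimal sketch of this pattern in gurobipy (the two-variable LP is made up, just to show the repeated calls):

import gurobipy as gp
from gurobipy import GRB

# Hypothetical two-variable LP, only to illustrate the add-then-re-solve pattern.
m = gp.Model("resolve_sketch")
x = m.addVar(name="x")
y = m.addVar(name="y")
m.addConstr(x + y <= 10, name="c0")
m.setObjective(x + 2 * y, GRB.MAXIMIZE)

m.optimize()  # first solve: the log shows the full presolve statistics

# Add a constraint and re-solve; Gurobi can start from the previous
# optimal basis, so the second log may be much shorter.
m.addConstr(x - y >= 1, name="c1")
m.optimize()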
Best regards,
Jaromił
Hi Jaromil, thanks a lot for the response.
Yeah, so I'm trying to handle a problem that involves disjunctions of linear constraints, and I'm calling Gurobi to solve each LP sub-problem.
Say I have a disjunctive constraint 1 <= x1 <= 3 \/ -2 <= x1 <= 0 along with some other linear constraints.
I first add those linear constraints to the model and in addition the following constraint:
-2 <= x1 <= 3
which is essentially a relaxation of that disjunction.
Then, I call
model.optimize()
Next, I branch on the disjunction and add the new bounds:
1 <= x1 <= 3
Then I call model.optimize() again.
Finally, I try the other branch:
-2 <= x1 <= 0
and call model.optimize() again.
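In code, the workflow looks roughly like this (a minimal gurobipy sketch; the model setup here is hypothetical and the real constraints and objective are omitted):

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("branch_sketch")
x1 = m.addVar(lb=-2.0, ub=3.0, name="x1")   # relaxation of the disjunction
# ... the other linear constraints and the real objective go here ...
m.setObjective(x1, GRB.MINIMIZE)

m.optimize()      # solve the relaxed problem

x1.LB = 1.0       # first branch: 1 <= x1 <= 3
x1.UB = 3.0
m.optimize()

x1.LB = -2.0      # second branch: -2 <= x1 <= 0
x1.UB = 0.0
m.optimize()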
I realize that Gurobi now supports piecewise-linear constraints and that the same problem could be encoded as a MIP. But my program needs more fine-grained control and does some extra work after solving the LP at each search state, so I'm wondering how much overhead this would create.
Here is the log of this repeated process of solving the LPs:
Hi,
Since you are only changing variable bounds, the dual basis stays feasible and Gurobi can use the information from the previous optimization run to perform a warm start. Thus, you don't see the presolve output for every optimization, because Gurobi does not have to start from scratch.
You can add a disjunctive constraint using the addGenConstrOr() function. This way, you don't have to branch on your own, but you lose control over the branching. Obviously, depending on what your program does between two optimizations, you might need to control the branching step yourself. It would be best to try both approaches and find out which suits your program better.
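As a rough illustration only (not necessarily the best formulation, and all names are made up), the disjunction 1 <= x1 <= 3 \/ -2 <= x1 <= 0 could be encoded with indicator constraints on two binary selector variables plus addGenConstrOr():

import gurobipy as gp
from gurobipy import GRB

m = gp.Model("or_sketch")
x1 = m.addVar(lb=-2.0, ub=3.0, name="x1")   # overall bounds from the relaxation
b1 = m.addVar(vtype=GRB.BINARY, name="b1")  # first disjunct selected
b2 = m.addVar(vtype=GRB.BINARY, name="b2")  # second disjunct selected
z = m.addVar(vtype=GRB.BINARY, name="z")    # z = b1 OR b2

# If a disjunct is selected, its bounds on x1 must hold.
m.addGenConstrIndicator(b1, True, x1 >= 1)
m.addGenConstrIndicator(b1, True, x1 <= 3)
m.addGenConstrIndicator(b2, True, x1 >= -2)
m.addGenConstrIndicator(b2, True, x1 <= 0)

# Force at least one disjunct to be selected.
m.addGenConstrOr(z, [b1, b2])
m.addConstr(z == 1, name="pick_one")

m.setObjective(x1, GRB.MAXIMIZE)
m.optimize()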
Best regards,
Jaromił
Hi Jaromil,
Thanks a lot for the info! This is very helpful!
Andrew