LP Warm Start (Python)
I have an LP implemented with gurobipy. The first solve takes 38 simplex iterations to reach optimality. I then immediately invoke optimize on the same instance to confirm that the previous solution is used as a warm start, and the output reports 0 simplex iterations to optimality with the same objective value. So far so good.
I now delete all the constraints, update the model, and re-add the exact same constraints that were just deleted. I have the exact same LP and, in fact, the previous optimal solution is still stored in the model's start attribute. I invoke optimize again, but now it takes 38 simplex iterations. I checked the model start and solution vectors; they are exactly the same. I also tried using the variable Start attribute, but that did not work. I suppose this is expected, since the documentation clearly states that it is only for MIP warm starts.
My end goal is not to delete constraints and re-add them to the model. I would like to adjust the LHS of constraints, since I am adding variables between solves, while the RHS is untouched. I should note that I am also adding constraints between solves. Is there not a way to mimic how AMPL automatically propagates new variables into constraints when they are summed over a set? Or at least combine the features of MIP and LP warm starts so they are not separate starting vectors? Why are they even separated? Even if they must be separated, if the LP start is feasible and optimal, I don't understand why Gurobi ignored it.
By the way, I am using all default parameter values with the exception of LPWarmStart=2.
My temporary fix has been to add a single binary variable to the model, forcing Gurobi to use the variable Start attributes since the model is now technically a MIP. I am still hoping for a better resolution to this issue.
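Roughly, the workaround looks like this (a minimal sketch; the dummy binary participates in no constraints and no objective term):

# Sketch of the dummy-binary workaround: one binary variable turns the
# model into a MIP, so the (MIP-only) Start attribute is honored.
lp_vars = model.getVars()
start_vals = model.getAttr('X', lp_vars)     # previous LP solution
model.addVar(vtype=gp.GRB.BINARY)            # dummy; appears nowhere else
model.setAttr('Start', lp_vars, start_vals)  # seed the MIP start
model.optimize()

The full script that reproduces the iteration counts: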
import gurobipy as gp
import numpy as np
import random

random.seed(2023)

env = gp.Env(empty=True)
env.setParam('OutputFlag', 0)
env.setParam('Presolve', 0)
env.setParam('LPWarmStart', 2)
env.start()

model = gp.Model(env=env)
X = np.array([model.addVar(vtype=gp.GRB.CONTINUOUS, lb=0, ub=10) for _ in range(10)])
x = np.array([random.randint(5, 10) for _ in range(10)])
# model.addVar(vtype=gp.GRB.BINARY)  # uncomment to make the model a MIP
model.setObjective(np.sum(x * X), gp.GRB.MAXIMIZE)

model.addConstr(X[0] <= X[3] - 4)
model.addConstr(X[2] <= X[3] - 2)
model.addConstr(X[5] <= X[0] - 4)
model.addConstr(X[9] <= X[8] - 6)
model.addConstr(X[6] <= X[4] - 1)

# First solve: cold start, 38 simplex iterations.
model.optimize()
print('Iterations:', int(model.IterCount))
print('Objective:', model.getObjective().getValue())
print()

# Second solve on the unchanged model: warm start, 0 iterations.
model.optimize()
print('Iterations:', int(model.IterCount))
print('Objective:', model.getObjective().getValue())
print()

# Delete all constraints and re-add identical ones.
model.remove(model.getConstrs())
model.update()
model.addConstr(X[0] <= X[3] - 4)
model.addConstr(X[2] <= X[3] - 2)
model.addConstr(X[5] <= X[0] - 4)
model.addConstr(X[9] <= X[8] - 6)
model.addConstr(X[6] <= X[4] - 1)

# Third solve: back to 38 iterations despite the identical LP.
model.optimize()
print('Iterations:', int(model.IterCount))
print('Objective:', model.getObjective().getValue())
print()
Hi Nicholas,
My understanding is that if you modify a model so that the basic solution remains basic, then resuming the optimization can take advantage of a warm start. If you delete a constraint that is active at the solution, then I expect the solver to ignore the solution and start from scratch. The solver doesn't try to figure out whether a deleted constraint is later added back to recreate the original model; from the solver's perspective, these constraints are not the same object, even though they are mathematically identical.
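To make that concrete, the simplex basis can also be saved and restored explicitly through the VBasis and CBasis attributes. A minimal sketch (not tested against your model; it assumes the constraints are re-added in the same order):

# Save the optimal basis after a successful simplex solve.
vbasis = model.getAttr('VBasis', model.getVars())     # variable statuses
cbasis = model.getAttr('CBasis', model.getConstrs())  # constraint statuses

# ... delete and re-add the constraints, in the same order ...
model.update()

# The re-added Constr objects are new, so hand the old statuses back.
model.setAttr('VBasis', model.getVars(), vbasis)
model.setAttr('CBasis', model.getConstrs(), cbasis)
model.optimize()  # simplex can now start from the restored basis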
If you wish to modify the LHS of a constraint, i.e. the coefficient matrix, then the Model.chgCoeff method would be the best approach, and it makes it more likely that the solver can take advantage of a warm start. I would suggest adding all the variables you may possibly need to the original model, even if they do not participate in any constraints yet.
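For illustration, a minimal sketch of that pattern (the names c and y are hypothetical):

# Add a variable up front; activate it later by changing a coefficient.
y = model.addVar(lb=0, ub=10)          # added up front, used nowhere yet
c = model.addConstr(X[0] + X[2] <= 5)  # some existing constraint
model.optimize()

# Bring y into c with coefficient 1.0; chgCoeff adds the term if the
# variable does not yet appear in the constraint.
model.chgCoeff(c, y, 1.0)
model.optimize()  # the previous basis remains usable as a warm start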
Just a couple of questions/comments based on your first post:
I have the exact same LP and, in fact, the previous optimal solution is still stored in the model's start attribute.
Can you clarify what you mean by this? The Start attribute is only valid for MIP models, as you point out later in the post.
Or at least combine the features of MIP and LP warm starts so they are not separate starting vectors? Why are they even separated?
A MIP start is in general not a basic solution that the simplex method can resume from, and a solution to the LP relaxation is in general not a valid solution for a MIP, so I can't see a way that they could be combined.
Hope this helps, and we'll see if anyone else has some comments, as I think there could be more to say.
- Riley
Riley Clement, thanks for the response; it definitely helps. It makes sense how the solver handles things internally.
For LP warm starts with the Start attribute, I was adding a dummy binary, as Maliheh Aramon mentions here. It seemed to effectively reduce the number of simplex iterations, but the MIP overhead was enough to negate any runtime improvement. I'll try interacting with the LP the way it was intended, which so far seems to be working for me.
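For anyone who lands here later: the LP-specific start vectors are the PStart (variable, primal) and DStart (constraint, dual) attributes, which the LPWarmStart parameter governs. A minimal sketch of carrying a solution across a modification this way (assuming the variable and constraint order is unchanged):

# Save the primal and dual solution before modifying the model.
xvals = model.getAttr('X', model.getVars())
pivals = model.getAttr('Pi', model.getConstrs())

# ... modify the model here ...
model.update()

# Feed the saved vectors back as an LP start.
model.setAttr('PStart', model.getVars(), xvals)
model.setAttr('DStart', model.getConstrs(), pivals)
model.optimize()  # with LPWarmStart=2, simplex tries this start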