Infeasible solution returned as optimal after warm start and adding lazy constraint
Hi all,
I have a problem that occurs in the following situation:
- I solve a problem that is always feasible and find a solution.
- I change some of the underlying data that is used in callbacks and the problem becomes infeasible.
- The infeasibility is identified and mentioned in the log; however, the (now) infeasible solution is still returned as optimal.
I know that it works if I do model.reset(). However, I want to benefit from the warm start: in many cases the solution indeed stays feasible after my changes, all lazy cuts that I added in the first iteration remain valid, and I want to avoid running the relatively time-consuming separation procedure again.
Is there any reason why that behaviour is desired? And in case yes, how can I catch these cases and manually identify the solution as infeasible?
Here is a MWE in Python that is similar to my situation and the corresponding log:
import gurobipy as gp
from gurobipy import GRB
def my_callback(model, where):
    if where == GRB.Callback.MIPSOL:
        # Add x[0] + x[1] + x[2] >= 1 to make the model infeasible
        model.cbLazy(model._xvars[0] + model._xvars[1] + model._xvars[2] >= 1)
# Create a new model
model = gp.Model("LazyConstraintExample")
# Add three binary variables, a constraint + arbitrary objective
x = model.addVars(3, vtype=GRB.BINARY, name="x")
model.addConstr(x[0] + x[1] + x[2] <= 0, "InitialConstraint")
model.setObjective(x[0] - x[1], GRB.MAXIMIZE)
# Enable lazy constraints
model.Params.LazyConstraints = 1
# Store variables for callback
model._xvars = [x[i] for i in range(3)]
# Optimize first without then with callback
model.optimize() # Finds solution
# model.reset() # Problem would not appear with reset()
model.optimize(my_callback) # Should find infeasibility
if model.Status == GRB.INFEASIBLE:
    print("Model is correctly identified as infeasible.")
elif model.Status == GRB.OPTIMAL:
    print("Found opt. solution:", [x[i].X for i in range(3)], "which is infeasible.")
# Output: Found opt. solution: [0.0, 0.0, 0.0] which is infeasible.
Log:
Set parameter LazyConstraints to value 1
Gurobi Optimizer version 12.0.0 build v12.0.0rc1 (mac64[arm] - Darwin 24.3.0 24D70)
CPU model: Apple M2
Thread count: 8 physical cores, 8 logical processors, using up to 8 threads
Non-default parameters:
LazyConstraints  1
Optimize a model with 1 rows, 3 columns and 3 nonzeros
Model fingerprint: 0x69da56b4
Variable types: 0 continuous, 3 integer (3 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [0e+00, 0e+00]
Found heuristic solution: objective -0.0000000
Presolve removed 1 rows and 3 columns
Presolve time: 0.00s
Presolve: All rows and columns removed
Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)
Thread count was 1 (of 8 available processors)
Solution count 1: -0
No other solutions better than -0
Optimal solution found (tolerance 1.00e-04)
Best objective -0.000000000000e+00, best bound -0.000000000000e+00, gap 0.0000%
Gurobi Optimizer version 12.0.0 build v12.0.0rc1 (mac64[arm] - Darwin 24.3.0 24D70)
CPU model: Apple M2
Thread count: 8 physical cores, 8 logical processors, using up to 8 threads
Non-default parameters:
LazyConstraints  1
Optimize a model with 1 rows, 3 columns and 3 nonzeros
Model fingerprint: 0x69da56b4
Variable types: 0 continuous, 3 integer (3 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [0e+00, 0e+00]
MIP start from previous solve did not produce a new incumbent solution
Presolve removed 1 rows and 3 columns
Presolve time: 0.00s
Presolve: All rows and columns removed
Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)
Thread count was 1 (of 8 available processors)
Solution count 1: -0
No other solutions better than -0
Optimal solution found (tolerance 1.00e-04)
Best objective -0.000000000000e+00, best bound -0.000000000000e+00, gap 0.0000%
User-callback calls 116, time in user-callback 0.00 sec
Found opt. solution: [0.0, 0.0, 0.0] which is infeasible.
Hi Felix,
Is there any reason why that behaviour is desired?
I wouldn't say this is desired behavior; it is more a case of using the solver in a way it is not intended to be used. Still, I can propose to our developers that they consider changing the behavior.
If you want to, you can manually store the solutions found in the first run, reset the model, then feed the solutions back in using the Start attribute. This will trigger the callback and any invalid solutions will be discarded:
m.optimize()
solutions = []
for i in range(m.SolCount):
    m.Params.SolutionNumber = i
    solutions.append(m.Xn)
m.reset()
m.Params.StartNumber = -1
vars = m.getVars()
for sol in solutions:
    m.setAttr("Start", vars, sol)
m.optimize(my_callback)
See SolutionNumber, StartNumber and Start for more details.
- Riley
Dear Riley, thanks for the reply. I was not really aware that I was using the solver in an unintended way, so it would indeed be good if that could be adjusted or warned about.
Thanks as well for the suggested workaround; that should work. I am not sure whether it performs as well as what I hoped for (due to the extra steps of saving, resetting and adding the solutions again, and because dual information is not used), but it may be the best that is currently possible.
Thank you!
Let me add another remark in case other people experience a similar behaviour:
The problem is not only that feasibility is reported when you would expect infeasibility: you can also get wrong solutions in situations where you solve a problem first without a callback and then with a callback that cuts off the previously returned solution.
Not sure whether my way of using the solver here is very exotic, but it seems to me like an easy-to-make mistake. The workaround suggested by Riley should work in that situation as well, though.
MWE for that situation:
import gurobipy as gp
from gurobipy import GRB
def my_callback(model, where):
    if where == GRB.Callback.MIPSOL:
        model.cbLazy(model._xvars[0] + model._xvars[1] <= 1)
# Create a new model, with x1,x2, and constraint x1 + x2 >= 0, obj. max x1 + x2
model = gp.Model("LazyConstraintExample")
x = model.addVars(2, vtype=GRB.BINARY, name="x")
model.addConstr(x[0] + x[1] >= 0, "InitialConstraint")
model.setObjective(x[0] + x[1], GRB.MAXIMIZE)
# Enable lazy constraints
model.Params.LazyConstraints = 1
# Store variables for callback
model._xvars = [x[i] for i in range(2)]
# Optimize first without then with callback
model.optimize() # Finds solution
# model.reset() # Problem would not appear with reset()
model.optimize(my_callback) # Should return other solution
# Print returned solution and objective value
print("Solution:", [x[i].X for i in range(2)]) # Expected: [1, 0] or [0, 1]; get [1,1]
print("Objective value:", model.objVal) # Expected: 1; get 2
Hi Felix,
Both of these cases stem from the same root cause: if a solution is found, then currently the only opportunity to declare it infeasible is at that point. If it is not cut off by a lazy constraint, it is added permanently to the solution pool.
I discussed this with our dev team and there are obstacles.
i) if we check the solutions against the callback and any violate it then the existing B&B tree might be invalid, because these solutions are used to prune nodes in the tree. This is certainly the case when the best solution violates the lazy constraints, as in your examples. You mentioned the dual information being discarded during a reset - it won't be valid anyway if the solution violates the lazy constraints so you need the reset.
ii) I'd say the number of users who are re-optimizing a model is small, the number of users using lazy constraints is small, and of these the fraction of users who want to use lazy constraints in a continued optimization call but not the first is small. So I think we're talking about an extremely small group of users who might try what you're doing here. Of this group my hunch is that most would not be applying lazy constraints that cut off existing solutions. So the issue I see is that we would have to run the callback on the solutions already found, to check their validity, and this has the potential to be quite expensive without gaining anything for these users - not all callbacks will be quick to run.
We'll see where the discussion with the dev team goes and if there's anything that can be done.
- Riley
Hi Riley,
thank you for the insights and the discussions with the dev team, great to know that it is an ongoing process!
I see that it is a problem if a callback changes and previous solutions are not feasible anymore.
My understanding is now that the log output "MIP start from previous solve did not produce a new incumbent solution" shows that the current best solution is checked against the callback at least at the start of a new solve, so that could be a moment to automatically do a reset() if that solution was optimal, since the previous dual information is not valid anymore anyway.
In any case, thanks for the great support!
Felix