
Infeasible solution returned as optimal after warm start and adding lazy constraint

Answered


5 comments

  • Riley Clement
    • Gurobi Staff

    Hi Felix,

    Is there any reason why that behaviour is desired?

    I wouldn't say this is desired behavior; it's perhaps more a case of using the solver in a way it is not intended to be used. Still, I can propose to our developers that they consider changing the behavior.

    If you want to, you can manually store the solutions found in the first run, reset the model, then feed the solutions back in using the Start attribute.  This will trigger the callback and any invalid solutions will be discarded.

    ...
    m.optimize()

    # Save every solution in the pool; SolutionNumber is a parameter,
    # and Xn is a variable attribute queried per pool solution
    solutions = []
    for i in range(m.SolCount):
        m.Params.SolutionNumber = i
        solutions.append(m.getAttr("Xn", m.getVars()))

    m.reset()

    # With StartNumber = -1, each assignment to Start appends a new MIP start
    m.Params.StartNumber = -1
    vars = m.getVars()
    for sol in solutions:
        m.setAttr("Start", vars, sol)

    m.optimize(my_callback)

    See SolutionNumber, StartNumber and Start for more details.

    - Riley

  • Felix Rauh

    Dear Riley, thanks for the reply. I was not really aware that I was using the solver in an unintended way, so it would indeed be good if that could be adjusted or warned about.

    Thanks as well for the suggested workaround; that should work. I am not sure whether it performs as well as I hoped (due to the extra steps of saving, resetting, and re-adding the solutions, and because dual information is not reused), but it is perhaps the best that is currently possible.

    Thank you!

  • Felix Rauh

    Let me add another remark in case other people experience a similar behaviour:

    The problem is not only that you can get feasibility where you would expect infeasibility; you can also get wrong solutions when you solve a problem first without a callback and then with a callback that cuts off the previously returned solution.

    Not sure whether my way of using the solver here is very exotic, but it seems to me like an easy-to-make mistake. The workaround suggested by Riley should work in that situation as well, though.

    MWE for that situation:

    import gurobipy as gp
    from gurobipy import GRB

    def my_callback(model, where):
        if where == GRB.Callback.MIPSOL:
            model.cbLazy(model._xvars[0] + model._xvars[1] <= 1)

    # Create a new model, with x1,x2, and constraint x1 + x2 >= 0, obj. max x1 + x2
    model = gp.Model("LazyConstraintExample")
    x = model.addVars(2, vtype=GRB.BINARY, name="x")
    model.addConstr(x[0] + x[1] >= 0, "InitialConstraint")
    model.setObjective(x[0] + x[1], GRB.MAXIMIZE)

    # Enable lazy constraints
    model.Params.LazyConstraints = 1

    # Store variables for callback
    model._xvars = [x[i] for i in range(2)]

    # Optimize first without then with callback
    model.optimize() # Finds solution
    # model.reset() # Problem would not appear with reset()
    model.optimize(my_callback) # Should return other solution

    # Print returned solution and objective value
    print("Solution:", [x[i].X for i in range(2)]) # Expected: [1, 0] or [0, 1]; get [1,1]
    print("Objective value:", model.objVal) # Expected: 1; get 2
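    Putting the two pieces together, here is a sketch of Riley's save/reset/re-feed workaround applied to this MWE (assuming a gurobipy license that can solve a 2-variable model; the SolutionNumber/StartNumber handling follows Riley's snippet above):

    ```python
    import gurobipy as gp
    from gurobipy import GRB

    def my_callback(model, where):
        if where == GRB.Callback.MIPSOL:
            model.cbLazy(model._xvars[0] + model._xvars[1] <= 1)

    model = gp.Model("LazyConstraintExample")
    x = model.addVars(2, vtype=GRB.BINARY, name="x")
    model.addConstr(x[0] + x[1] >= 0, "InitialConstraint")
    model.setObjective(x[0] + x[1], GRB.MAXIMIZE)
    model.Params.LazyConstraints = 1
    model._xvars = [x[i] for i in range(2)]

    model.optimize()  # first solve without the callback: returns x = [1, 1]

    # Workaround: save the solution pool, reset, then feed the solutions
    # back in as MIP starts so the callback can veto them.
    solutions = []
    for i in range(model.SolCount):
        model.Params.SolutionNumber = i
        solutions.append(model.getAttr("Xn", model.getVars()))
    model.reset()
    model.Params.StartNumber = -1  # append a new MIP start per assignment
    for sol in solutions:
        model.setAttr("Start", model.getVars(), sol)

    model.optimize(my_callback)  # the lazy constraint now cuts off [1, 1]
    print("Solution:", [x[i].X for i in range(2)])
    print("Objective value:", model.objVal)
    ```

    With the saved solution rejected by the callback, the second solve should return an objective of 1 rather than the invalid 2.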
  • Riley Clement
    • Gurobi Staff

    Hi Felix,

    Both of these cases stem from the same root cause: when a solution is found, the only opportunity to declare it infeasible is at that moment. If it is not cut off by a lazy constraint, then it is added permanently to the solution pool.

    I discussed this with our dev team and there are obstacles.

    i) If we check the existing solutions against the callback and any of them violate it, then the existing B&B tree might be invalid, because these solutions are used to prune nodes in the tree. This is certainly the case when the best solution violates the lazy constraints, as in your examples. You mentioned the dual information being discarded during a reset: it won't be valid anyway if the solution violates the lazy constraints, so you need the reset.

    ii)  I'd say the number of users who are re-optimizing a model is small, the number of users using lazy constraints is small, and of these the fraction of users who want to use lazy constraints in a continued optimization call but not the first is small.  So I think we're talking about an extremely small group of users who might try what you're doing here.  Of this group my hunch is that most would not be applying lazy constraints that cut off existing solutions.  So the issue I see is that we would have to run the callback on the solutions already found, to check their validity, and this has the potential to be quite expensive without gaining anything for these users - not all callbacks will be quick to run.

    We'll see where the discussion with the dev team goes and if there's anything that can be done.

    - Riley

  • Felix Rauh

    Hi Riley,

    thank you for the insights and the discussions with the dev team, great to know that it is an ongoing process!

    I see that it is a problem if a callback changes and previous solutions are not feasible anymore.

    My understanding is now that the log output "MIP start from previous solve did not produce a new incumbent solution" shows that the current best solution is checked against the callback at least at the start of a new solve, so that could be a moment to automatically do a reset() if that solution was optimal, since the previous dual information is not valid anymore anyway.

    In any case, thanks for the great support!

    Felix

