Best practice for implementing a branch-and-cut algorithm with Gurobi
I am implementing a solver for a binary linear optimization problem. This problem involves a large number of necessary inequalities. Besides the inequalities that are necessary to describe the set of feasible binary solutions, further classes of valid inequalities are known. These classes have exponential size, but violated inequalities can be separated efficiently.
From the Gurobi documentation and various forum entries I learned the following:
- The best method for dealing with the large number of necessary constraints is to use lazy constraints.
- Additional valid inequalities can be added to the model as "user cuts" with the cbCut method (a rough sketch of how I understand this setup follows below).
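To make this concrete, my understanding of that callback-based setup is roughly the following sketch. The separation routines separate_necessary and separate_valid_cuts are hypothetical placeholders that return violated inequalities as (coefficient list, right-hand side) pairs, and the variable count is arbitrary:

```python
import gurobipy as gp
from gurobipy import GRB

def separate_necessary(vals):
    # Hypothetical placeholder: return violated necessary inequalities
    # as (coefficient list, rhs) pairs for the given solution values.
    return []

def separate_valid_cuts(vals):
    # Hypothetical placeholder: return violated additional valid inequalities.
    return []

def callback(model, where):
    if where == GRB.Callback.MIPSOL:
        # An integer feasible solution was found: add violated necessary
        # inequalities as lazy constraints.
        vals = model.cbGetSolution(model._vars)
        for coeffs, rhs in separate_necessary(vals):
            model.cbLazy(gp.quicksum(c * v for c, v in zip(coeffs, model._vars)) <= rhs)
    elif where == GRB.Callback.MIPNODE and \
            model.cbGet(GRB.Callback.MIPNODE_STATUS) == GRB.OPTIMAL:
        # A node relaxation was solved to optimality: add violated valid
        # inequalities as user cuts on the fractional solution.
        vals = model.cbGetNodeRel(model._vars)
        for coeffs, rhs in separate_valid_cuts(vals):
            model.cbCut(gp.quicksum(c * v for c, v in zip(coeffs, model._vars)) <= rhs)

m = gp.Model()
x = m.addVars(100, vtype=GRB.BINARY)   # placeholder problem size
m._vars = list(x.values())
m.Params.LazyConstraints = 1           # required when adding constraints via cbLazy
m.optimize(callback)
```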
For my problem, I have found that neither lazy constraints nor user cuts work particularly well. My guess is that lazy constraints suffer from the following two problems:
- Lazy constraints can only be added when an integer feasible solution is found. I imagine that it is computationally quite expensive to obtain an integer feasible solution from an LP relaxation, and all this work is "wasted" when the newly found integer solution is cut off afterwards by a new lazy constraint.
- Typically, many lazy constraints are added over many iterations. This results in quite a large model, which slows the solver down significantly.
Regarding user cuts, I found that the cuts are mostly ignored. I know that the added cuts would significantly improve the LP bound (if they were used), but the LP bound reported in the Gurobi log does not reflect this improvement.
The best solution I have found to address all these problems is the following:
Instead of defining a binary linear model, I define a model with continuous variables and only solve the LP relaxation; the advantage is that the solver does not spend any resources on finding an integer feasible solution. Every time the LP relaxation is solved, I search for newly violated inequalities (both necessary inequalities and other valid inequalities) and add them to the model. I also remove inequalities that have not been active for a few iterations, which keeps the total number of inequalities from growing so large that it slows the solver down. Finally, if no further violated inequalities are found and the solution is still fractional, I start branching. I implement this branching manually by simply creating two copies of the model and adding the constraints x = 0 and x = 1, respectively.
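For concreteness, here is a minimal sketch of this cut-and-branch loop. It assumes minimization, a tolerance of 1e-6, and a hypothetical separation routine separate_all that returns violated inequalities as (coefficient list, right-hand side) pairs; it is only meant to illustrate the scheme, not to be a definitive implementation:

```python
import gurobipy as gp
from gurobipy import GRB

TOL = 1e-6
_cut_id = 0  # global counter so that every added cut gets a unique name

def separate_all(vals):
    # Hypothetical placeholder: return all violated inequalities (necessary
    # ones and additional valid ones) as (coefficient list, rhs) pairs.
    return []

def solve_node(model, incumbent):
    """Cutting-plane loop on the LP relaxation, followed by manual 0/1 branching."""
    global _cut_id
    inactive = {}  # constraint name -> number of consecutive iterations with slack
    while True:
        model.optimize()
        if model.Status != GRB.OPTIMAL or model.ObjVal >= incumbent - TOL:
            return incumbent  # node is infeasible or pruned by bound (minimization)
        x = model.getVars()
        cuts = separate_all([v.X for v in x])
        if not cuts:
            break  # nothing violated: proceed to the branching step below
        # Remove constraints that have been inactive (slack) for a few iterations.
        for con in model.getConstrs():
            name = con.ConstrName
            inactive[name] = inactive.get(name, 0) + 1 if abs(con.Slack) > TOL else 0
            if inactive[name] >= 3:
                model.remove(con)
        # Add the newly separated violated inequalities.
        for coeffs, rhs in cuts:
            _cut_id += 1
            model.addConstr(gp.quicksum(c * v for c, v in zip(coeffs, x)) <= rhs,
                            name=f"cut_{_cut_id}")
    # No violated inequalities left: branch on a fractional variable, if any.
    frac = next((v for v in x if TOL < v.X < 1.0 - TOL), None)
    if frac is None:
        return min(incumbent, model.ObjVal)  # the LP solution is integral
    for bound in (0.0, 1.0):
        child = model.copy()  # one copy per branch
        child.getVars()[frac.index].LB = bound
        child.getVars()[frac.index].UB = bound
        incumbent = solve_node(child, incumbent)
    return incumbent

# Usage: build the root LP relaxation with continuous [0, 1] variables and the
# initially known constraints, then call solve_node(root, float("inf")).
```

In this sketch the branching fixes a variable by setting its lower and upper bound in the copy rather than adding explicit constraints x = 0 and x = 1, which keeps the copied LPs slightly smaller; otherwise it follows the steps described above.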
While this implementation is much faster than lazy constraints plus user cuts, I feel that it is not the intended method. In particular, the manual copying of the model during branching seems highly inefficient. Also, with this implementation, Gurobi's very sophisticated heuristics for finding integer solutions are not used.
To summarize my question: What is the best practice for implementing a solver with Gurobi for an ILP with a large number of necessary constraints and a large number of additional strong valid inequalities that can be separated efficiently?
[An example of such an optimization problem is the clique partitioning problem. It is described, for instance, by Sørensen (2020).]