Apparent bug using degradation in hierarchical multi-objectives
Hello Gurobi team.
I'm using the Python API `gurobipy`, version 12.0.1, to test a tiny model with two hierarchical objectives. I set the relative and absolute MIP gaps to 1e-5 and abstol=1 for the first objective. However, when the second objective is optimized, it degrades the first by 5 units (from 5 to 10), far more than the abstol of 1. Below is a code snippet that reproduces the situation.
Am I missing something here? Thanks in advance for your support!
import gurobipy as gp
mdl = gp.Model()
x = mdl.addVar(name="x", vtype=gp.GRB.CONTINUOUS, lb=0, ub=50)
y = mdl.addVar(name="y", vtype=gp.GRB.CONTINUOUS, lb=0, ub=50)
cons1 = mdl.addConstr(2 * x + y >= 10, name="C1")
mdl.ModelSense = gp.GRB.MINIMIZE
mdl.setObjectiveN(expr=x + y, index=0, priority=10, name='min_x_plus_y', abstol=1) # x=5, y=0 is optimal, x+y=5
mdl.setObjectiveN(expr=x, index=1, priority=9, name='min_x') # x=4, y=2 should be the optimal solution, x+y=6
mdl.setParam("MIPGap", 0.00001)
mdl.setParam("MIPGapAbs", 0.00001)
mdl.optimize()
print("x, y: ", (x.X, y.X)) # (0, 10), x+y=10
-
Hello Luiz,
Thank you for raising this concern and providing an example. What you are experiencing is not a bug but a difference in how the multi-objective feature handles MIPs versus LPs.
The behavior you expected is consistent with the meaning of the ObjNAbsTol attribute for multi-objective MIPs. If you convert your multi-objective model to a MIP by adding a binary or integer variable (or by declaring x or y as integer variables), you will see a final solution of (4, 2). However, the behavior for LPs is different.
The Allowing Multiple-Objective Degradation section of our documentation gives the formula by which this tolerance allows objective degradation in the MIP setting, and also describes the different approach used for multi-objective LPs. In the LP case, the tolerance determines a set of variables to be fixed when solving for later objectives. For your model, the large tolerance results in no variables being fixed, hence the behavior you observed. Setting the tolerance to a smaller value would cause y to be fixed to zero in the second problem. More details can be found in the documentation section linked above.
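For illustration, here is a minimal sketch (not part of the original answer, reusing the model from the question) with a tighter abstol. Under the LP behavior described above, y should now be fixed to zero before the second objective is solved, so the first objective is fully preserved:

import gurobipy as gp

mdl = gp.Model()
x = mdl.addVar(name="x", lb=0, ub=50)
y = mdl.addVar(name="y", lb=0, ub=50)
mdl.addConstr(2 * x + y >= 10, name="C1")
mdl.ModelSense = gp.GRB.MINIMIZE
# Tighter abstol: y's reduced cost (0.5) now exceeds the tolerance, so y is fixed at 0
mdl.setObjectiveN(expr=x + y, index=0, priority=10, name='min_x_plus_y', abstol=1e-6)
mdl.setObjectiveN(expr=x, index=1, priority=9, name='min_x')
mdl.optimize()
print("x, y: ", (x.X, y.X))  # expected: (5.0, 0.0), so x+y=5 is preserved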
-
The behavior you expect is how ObjNAbsTol behaves for multi-objective MIPs. However, this attribute behaves differently for multi-objective LPs. From the documentation:
Objective degradations are handled differently for multi-objective LP models. For LP models, solution quality for higher-priority objectives is maintained by fixing some variables to their values in previous optimal solutions. These fixings are decided using variable reduced costs. The value of the ObjNAbsTol parameter indicates the amount by which a fixed variable's reduced cost is allowed to violate dual feasibility. The value of the related ObjNRelTol attribute is ignored.
In your example, the following happens:
- Gurobi solves the model with respect to the objective function \(x + y\). The optimal solution is \((5, 0)\), as expected.
- Because ObjNAbsTol is set to 1, Gurobi fixes all variables whose reduced cost is greater than \(1\) to their lower bounds of \(0\). Both \(x\) and \(y\) have reduced costs less than or equal to \(1\), so neither variable is fixed. If you were instead to set ObjNAbsTol=0, \(y\) would be fixed to \(0\) because it has a reduced cost of \(0.5\) (see the sketch after this list).
- Gurobi then solves the model with respect to the objective function \(x\). Neither \(x\) nor \(y\) is fixed due to the choice of ObjNAbsTol=1, so the optimal solution is \((0, 10)\).
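To see these reduced costs yourself, a minimal sketch (not in the original comment) is to solve the first objective on its own and query the RC attribute:

import gurobipy as gp

mdl = gp.Model()
x = mdl.addVar(name="x", lb=0, ub=50)
y = mdl.addVar(name="y", lb=0, ub=50)
mdl.addConstr(2 * x + y >= 10, name="C1")
mdl.setObjective(x + y, gp.GRB.MINIMIZE)  # first hierarchical objective only
mdl.optimize()
# x is basic (reduced cost 0); y is nonbasic at its lower bound with reduced cost 0.5
print("reduced costs: ", (x.RC, y.RC))  # expected: (0.0, 0.5)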
The multi-objective LP approach of fixing variables based on their reduced costs tends to be much faster than the multi-objective MIP approach. When solving multi-objective MIPs, Gurobi adds constraints that define allowable objective degradations for previous hierarchical objectives, then re-solves the model.
To mimic the behavior of multi-objective MIPs, you could add a dummy binary variable to the model to make it a MIP instead of an LP. The downside is that this could result in worse performance for non-trivial models.
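As a sketch of that workaround (hypothetical code, again based on the model from the question), adding an unused binary variable switches Gurobi to the MIP code path, where abstol=1 translates into the degradation constraint \(x + y \leq 5 + 1\) for the second solve:

import gurobipy as gp

mdl = gp.Model()
x = mdl.addVar(name="x", lb=0, ub=50)
y = mdl.addVar(name="y", lb=0, ub=50)
z = mdl.addVar(name="dummy", vtype=gp.GRB.BINARY)  # unused; its only purpose is to make the model a MIP
mdl.addConstr(2 * x + y >= 10, name="C1")
mdl.ModelSense = gp.GRB.MINIMIZE
mdl.setObjectiveN(expr=x + y, index=0, priority=10, name='min_x_plus_y', abstol=1)
mdl.setObjectiveN(expr=x, index=1, priority=9, name='min_x')
mdl.optimize()
print("x, y: ", (x.X, y.X))  # expected: (4.0, 2.0), degrading x+y from 5 to 6, within abstol=1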