
Relaxing constraints to see how much we can improve objective value

Answered

Comments

3 comments

  • Alison Cozad
Gurobi Staff

    This can be tricky because shadow prices are not well-defined in mixed-integer programs.  For this reason, Gurobi does not provide dual values for a model with integer variables.  Nevertheless, if your goal is to address the question of "which constraint(s) should I relax to improve the objective," several alternative approaches are noted below.

     

Fastest, least accurate: Fix the model to get shadow values or duals

You can only retrieve duals (pi values or shadow values) from a continuous model, so you need to make the model continuous to get them. : )  To do this, solve the MIP model and then fix all of the integer variables at their solution values. This gives you a continuous model from which you can retrieve shadow values.  You can find instructions to do this here: How do I retrieve the (dual) Pi values for a MIP problem?

This approach is relatively straightforward to implement and fast to run. The drawback is that if you are seeking to identify "which constraint(s) should I relax to improve the objective," it may yield less accurate results or even nonsensical answers.

To illustrate this concept, consider a constraint that is not binding at the MIP solution. In the fixed problem, this constraint has a shadow value of zero because there is still a gap between the solution and the constraint. Nevertheless, relaxing this constraint could still lead to an improvement in the objective.



     

    Most accurate, slower: Relax each constraint of interest one-at-a-time

    To truly know the effect of relaxing a constraint on the model, you can test it.  Begin by solving the original model. Next, relax the constraint and re-solve the model for each constraint you wish to investigate. Then compare the resulting objective values to determine the extent of improvement. By repeating this process for all relevant constraints, you can create a ranked list of constraints that "hold back the objective the most."

    This approach offers several advantages due to its accuracy. It considers whether loosening a constraint might result in hitting another constraint, and it allows for testing various degrees of constraint relaxation, such as reverting to the original project bounds or adhering to more relaxed physical limits and hardware specifications.

    You can configure this using Gurobi's Multiple Scenarios feature, enabling you to define multiple constraint right-hand-side values and solve them simultaneously.

I suggest first narrowing down the constraints to examine, focusing on those that can reasonably be relaxed.

     

    One other idea: Create an infeasibility 

Let's say you are really trying to ask, "What constraints do I need to relax to get, say, a 10% improvement on the objective?"  Then, you can intentionally create an infeasible problem by setting a bound on the objective that is 10% better than the optimal solution -- which is impossible.  Once you have this infeasible problem, you can use computeIIS() or another infeasibility analysis tool to get the list of constraints causing the infeasibility.  These are constraints that, if relaxed, would really help the objective.  For more details on infeasibility analysis tools, please see How do I determine why my model is infeasible?

     

    I will put together an example of these to illustrate.

     

     

     

     

  • Alison Cozad
Gurobi Staff

    To illustrate these three methods, we will dig into the diet problem below. This is based on the diet.py example in Gurobi's example library.

    The diet problem -- MIP version

Here is the full MIP version of the diet problem model, written in LP format.

    Minimize
      2.49 hamburger + 2.89 chicken + 1.5 hot_dog
       + 1.89 fries + 2.09 macaroni + 1.99 pizza
       + 2.49 salad + 0.89 milk + 1.59 ice_cream

    Subject To
     calories: 410 hamburger + 420 chicken + 560 hot_dog
       + 380 fries + 320 macaroni + 320 pizza + 320 salad
       + 100 milk + 330 ice_cream <= 2200

     protein: 24 hamburger + 32 chicken + 20 hot_dog
       + 4 fries + 12 macaroni + 15 pizza + 31 salad
       + 8 milk + 8 ice_cream >= 91

     fat: 26 hamburger + 10 chicken + 32 hot_dog + 19 fries
       + 10 macaroni + 12 pizza + 12 salad + 2.5 milk
       + 10 ice_cream <= 65

     sodium: 730 hamburger + 1190 chicken + 1800 hot_dog
       + 270 fries + 930 macaroni + 820 pizza + 1230 salad
       + 125 milk + 180 ice_cream <= 1779

    Generals
     hamburger chicken hot_dog fries macaroni
     pizza salad milk ice_cream
    End

    All of the variables in this model are integers, assuming that you have to consume discrete quantities of each.

When I toss this model into ChatGPT, it correctly describes the problem as:

    This optimization model aims to minimize the cost of a diet while meeting specified nutritional constraints. The objective function represents the total cost of selected food items, including hamburgers, chicken, hot dogs, fries, macaroni, pizza, salad, milk, and ice cream.

    The constraints ensure that the total calories, protein, fat, and sodium content of the diet fall within prescribed limits.

    The decision variables, specified under "Generals," represent the quantities of each food item to be chosen in the optimized diet. The model seeks the most cost-effective combination of foods that satisfies the nutritional requirements.

    The nutritional requirements are:

• 2,200 calories or fewer
• 91 protein or more
• 65 fat or less
• 1,779 sodium or less

Clearly this model glosses over many dietary considerations (and sensible measurement units), but it is an excellent illustration for our discussion.

If you want to follow along, please save the above model as model.lp. Then, you can read in the model with model = gp.read('model.lp').

     

    The solution to the diet problem: Drink so much milk

After solving this model with model.optimize(), the optimal diet is 12 glasses of milk and nothing else, which sounds crazy. Like many diet solutions, this one seems pretty weird. Still, it's not as extreme as George Dantzig's diet solution to mostly drink vinegar (more details here).

    The total cost for this diet is $10.68 with the following nutritional values:

    • 1,200 calories
    • 96 protein
    • 30 fat
    • 1,500 sodium

    As you can see, none of these nutritional restrictions are at their limits. Shortly, we'll find out that this makes their shadow values not very helpful (I'm hinting at what's to come with foreshadowing of shadow values ;) ).
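As a quick sanity check, the reported totals follow directly from milk's column in the model (a hand computation, not solver output):

```python
# Check the all-milk solution (12 glasses) against milk's coefficients
glasses = 12
cost, calories, protein, fat, sodium = 0.89, 100, 8, 2.5, 125

totals = {
    'cost': glasses * cost,          # about $10.68
    'calories': glasses * calories,  # 1200, limit <= 2200
    'protein': glasses * protein,    # 96, minimum >= 91
    'fat': glasses * fat,            # 30.0, limit <= 65
    'sodium': glasses * sodium,      # 1500, limit <= 1779
}
for name, value in totals.items():
    print(name, value)
```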

     

    How do we make this even more optimal ;)

The question now is: what can we relax in this model to best reduce the total cost below the optimal value of $10.68?  We will try each method above and see what we learn.

Method 1: Fix the model to get shadow values or duals

    As noted in my previous post, this is the fastest and least accurate method.  This is the method described in How do I retrieve the (dual) Pi values for a MIP problem?  Here is how I did this for this example:

# Create and optimize the fixed model (integer variables fixed at their solution values)
fixed = model.fixed()
fixed.optimize()

# Retrieve the dual (Pi) values
for c in fixed.getConstrs():
    print(f"The pi/shadow value for {c.ConstrName} is {c.Pi}")

    This gives the following output:

    The pi/shadow value for calories is 0.0
    The pi/shadow value for protein is 0.0
    The pi/shadow value for fat is 0.0
    The pi/shadow value for sodium is 0.0

All the shadow values are zero. If I used this to decide which constraints to ease up on, I would conclude that relaxing constraints doesn't budge the objective at all. Of course, this conclusion doesn't hold for the full MIP model. But for the fixed problem, it's spot-on: the solution has slack in every nutritional constraint, so there is no local benefit to relaxing any of them.
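The slack in each constraint at the 12-glasses-of-milk solution makes the zero duals concrete (again a hand computation from the model data):

```python
# Slack of each nutritional constraint at the all-milk solution.
# A dual from the fixed model can only be nonzero when the slack is zero.
solution = {'calories': 1200, 'protein': 96, 'fat': 30.0, 'sodium': 1500}
limits = {'calories': ('<=', 2200), 'protein': ('>=', 91),
          'fat': ('<=', 65), 'sodium': ('<=', 1779)}

slacks = {}
for name, (sense, rhs) in limits.items():
    slacks[name] = rhs - solution[name] if sense == '<=' else solution[name] - rhs
    print(f'{name}: slack {slacks[name]}')  # all strictly positive -> dual is 0
```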

     

    Method 2: Relax each constraint of interest one-at-a-time

    We anticipate this to be slower yet more accurate since we're reworking the model by easing each constraint individually. For this discussion's sake, assume we can slightly ease up on each constraint without entirely eliminating it. To achieve this, I'll establish appropriate new relaxed nutritional values. Subsequently, I'll iterate through each constraint, relaxing it, and solve the updated model using the following code:

import pandas as pd

# Define the 'suitable' relaxed values
rhs_values = [['calories', 2200, 2700], ['protein', 91, 80],
              ['fat', 65, 75], ['sodium', 1779, 2000]]
df = pd.DataFrame(rhs_values, columns=['Constraint', 'Original RHS', 'Relaxed RHS'])

# Store the original objective value (model has already been optimized)
original_obj = model.getObjective().getValue()

# Loop through all constraints, relaxing one at a time
for i, row in df.iterrows():
    # Make a copy of the original model
    tmp_model = model.copy()
    tmp_model.Params.OutputFlag = 0

    # Relax the constraint
    constr = tmp_model.getConstrByName(row['Constraint'])
    constr.RHS = row['Relaxed RHS']

    # Solve the model and store the results
    tmp_model.optimize()
    df.at[i, 'New objective value'] = tmp_model.getObjective().getValue()
    df.at[i, 'Solve Status'] = tmp_model.Status
    df.at[i, 'Empirical shadow value'] = (original_obj - tmp_model.getObjective().getValue()) \
        / (row['Original RHS'] - row['Relaxed RHS'])

    This gives me the following table:

Constraint   Original RHS   Relaxed RHS   New objective value   Solve status   Empirical shadow value
----------   ------------   -----------   -------------------   ------------   ----------------------
calories     2200           2700          10.68                 2              0
protein      91             80            8.72                  2              0.178181818
fat          65             75            10.68                 2              0
sodium       1779           2000          10.5                  2              -0.00081448

    This indicates that relaxing the constraints on calories and fat didn't alter the solution. On the flip side, relaxing the protein and sodium constraints does improve the objective value. Based on this data, I'd conclude that protein and sodium hold back the solution.
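To make the "Empirical shadow value" column concrete, here is the protein row of the table computed by hand:

```python
# Empirical shadow value = (original obj - new obj) / (original RHS - relaxed RHS)
original_obj, new_obj = 10.68, 8.72   # protein row of the table above
original_rhs, relaxed_rhs = 91, 80

shadow = (original_obj - new_obj) / (original_rhs - relaxed_rhs)
print(shadow)  # roughly 0.178 per unit of protein requirement
```

Note that sodium's value is negative only because its RHS was increased (relaxed upward) while the objective decreased; the sign convention depends on the constraint sense.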

     

    Method 3: Create an infeasibility 

For the third method, we will force this model to be infeasible by asking, "I want the objective to improve by 10%. What constraints prevent me from doing this?"  Once I have created an infeasible model, I compute an IIS (Irreducible Infeasible Subsystem) to determine a minimal set of constraints that conflict with my new constraint to improve the objective by 10%. For more information on computeIIS, see How do I use 'compute IIS' to find a subset of constraints that are causing model infeasibility?
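The bound for the new objective constraint is simple arithmetic on the known optimum:

```python
# RHS of the forthcoming 'improve_obj_by_10perc' constraint:
# a cost 10% better (lower) than the optimal $10.68 diet
optimal_cost = 10.68
target = optimal_cost * 0.9
print(target)  # 9.612, which no feasible diet can achieve
```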

    Here is the code I used to do this:

# Create a copy of the solved model
forced_infeasible = model.copy()

# Add a new constraint that forces the model to be infeasible:
# require the objective to be 10% better than the known optimum
obj_expression = forced_infeasible.getObjective()
obj_value = model.getObjective().getValue()
new_constr = forced_infeasible.addConstr(obj_expression <= obj_value*0.9, 'improve_obj_by_10perc')

# Force the new constraint into the IIS, since we know it is part of the cause
# of the infeasibility
new_constr.IISConstrForce = 1

# Compute the IIS and write it to a file
forced_infeasible.computeIIS()
forced_infeasible.write('iis.ilp')

    I get the following IIS as an iis.ilp file:

    Minimize
     
    Subject To
     protein: 24 hamburger + 32 chicken + 20 hot_dog + 4 fries + 12 macaroni
       + 15 pizza + 31 salad + 8 milk + 8 ice_cream >= 91
     sodium: 730 hamburger + 1190 chicken + 1800 hot_dog + 270 fries
       + 930 macaroni + 820 pizza + 1230 salad + 125 milk + 180 ice_cream
       <= 1779
     improve_obj_by_10perc: 2.49 hamburger + 2.89 chicken + 1.5 hot_dog
       + 1.89 fries + 2.09 macaroni + 1.99 pizza + 2.49 salad + 0.89 milk
       + 1.59 ice_cream <= 9.612
    Bounds
     salad free
     milk free
    Generals
     hamburger chicken hot_dog fries macaroni pizza salad milk ice_cream
    End

    This IIS has several bounds and three constraints: the new objective constraint, the protein limit, and the sodium limit. I would interpret this as an indication that the objective could see improvement if we eased up on either the protein or the sodium limit.  

     

    Conclusion

For this model, Methods 2 and 3 agree that relaxing the protein and sodium limits would improve the objective.  Unfortunately, Method 1 does not yield meaningful results for the question "Which constraint(s) should I relax to improve the objective?"  This is not surprising, because pi or shadow values are not well-defined for MIP models.  I would recommend taking an approach similar to Method 2 or 3.

    I hope this helps. Please let me know if I can clarify any points.

  • Yash Puranik

    I love the detailed explanations with examples.

     

    Thank you Alison Cozad, this is a gem I am bookmarking for myself :-)

