Non-preemptive goal programming in Gurobipy
Hi,
I have the following setup: First I run my optimization model to minimize only the slack penalties. Then the optimal slack penalty value is imposed as an upper bound, and the original objective function is optimized in the next and final step. For some reason, the overall objective function value from the goal programming approach is significantly worse than that of the original MILP model.
from gurobipy import GRB

# First stage: minimize only the slack penalties
Model.setObjective(slack_obj, GRB.MINIMIZE)
Model.optimize()

# Retrieve the slack variable value and fix it as an upper bound
Slack_Var.UB = Target_Value
# Discard the incumbent so the second solve does not warm start
Model.reset(0)

# Second stage: optimize the original objective
Model.setObjective(obj_expr - slack_obj, GRB.MAXIMIZE)
Model.optimize()
While the optimal objective value of the original MILP model is 1.395, the optimal value of this hierarchical model is 1.411. Am I missing something here? I do not want a warm start for the second model.
-
Could you please share a minimal working example to reproduce the issue? Or at least share the solver logs showing the different behavior.
Are you performing any other changes to the model besides the objective? Do you solve to a 0% MIPGap? Please note that, depending on the numerical properties of your model, tolerances may play a significant role.
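For illustration, here is a minimal sketch (reusing the variable names from your snippet; the epsilon value is an assumption) of how one could solve the first stage to a 0% gap and then relax the slack bound by a small tolerance before the second stage, so that tolerance-level violations in the first-stage solution do not cut off the true second-stage optimum:

# Illustrative sketch: solve the first stage to proven optimality
Model.Params.MIPGap = 0.0
Model.setObjective(slack_obj, GRB.MINIMIZE)
Model.optimize()

# Relax the bound by a small epsilon (1e-6 is an assumed value) so that
# tolerance-level slack in the first-stage solution does not make the
# second-stage bound slightly too tight
eps = 1e-6
Slack_Var.UB = Model.ObjVal + eps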
Best regards,
Jaromił