Balancing penalty weights when relaxing constraints
I have a blended model with multiple goals and a number of constraints.
Some of the constraints can be relaxed if the model is infeasible, which it frequently is.
The goals are (sadly) one to two orders of magnitude apart, so a small change to one can make the other irrelevant unless I normalize somehow.
I can make the goals (kinda, sorta) equivalent in importance by normalizing the weights in the sum of linear expressions that make up the objective, using the maximum possible values I calculated for each. It's not perfect, and if there's a better way to do it I'm all ears, but it's good enough for this problem.
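Roughly what I do today for the objective side (the variables, coefficients and maxima below are made up; the only point is the 1/max scaling):

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("blended")
x = m.addVars(10, ub=100, name="x")

# Two goals as linear expressions; their natural scales are far apart.
cost = gp.quicksum(950.0 * x[i] for i in range(10))   # can reach ~1e6
spread = gp.quicksum(x[i] for i in range(10))         # can reach ~1e3

# Divide each term by the maximum value I calculated for it, so a unit of
# "badness" in one goal counts for roughly as much as a unit in the other.
max_cost, max_spread = 1e6, 1e3
m.setObjective((1.0 / max_cost) * cost + (1.0 / max_spread) * spread,
               GRB.MINIMIZE)
```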
I can't seem to figure out a way to do the same for relaxing the constraints so that the relaxations are equally valuable. Is there something out of the box that can help with this at all?
A concrete example might help.
So let's say I'm buying widgets from 3 suppliers: A, B, and C. Stocks at all 3 are constrained. Prices are different at all 3.
I have two conflicting requirements -
1. minimize my spend for a set number of widgets.
2. try to get a selected split across suppliers. So (say) A gets 25%, B gets 45% and C gets the rest. The split is on the count of widgets bought, not spend (this distinction matters, contrived as the example is).
I modelled (2) as the sum of widgets bought from each supplier, constrained to the target percentage. Clearly the two goals are incompatible.
In this case the constraints can both be relaxed. I'm trying to figure out a way to make the two sets of constraints roughly equivalent in importance using the rhspen[] values in feasRelax.
This is a contrived example for illustration. My problem has more than two sets of constraints that compete, so I'm looking for something generic if it exists.
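For reference, here is roughly the widget version, ending with the feasRelax call whose rhspen values I don't know how to balance (the supplier data is made up, and A's stock is deliberately too small so the split constraints are infeasible):

```python
import gurobipy as gp
from gurobipy import GRB

price = {"A": 10.0, "B": 12.0, "C": 9.0}   # made-up prices
stock = {"A": 100, "B": 300, "C": 500}     # made-up stock limits
split = {"A": 0.25, "B": 0.45, "C": 0.30}  # target share of the widget count
demand = 600

m = gp.Model("widgets")
buy = m.addVars(list(price), vtype=GRB.INTEGER, name="buy")

m.addConstrs((buy[s] <= stock[s] for s in price), name="stock")
m.addConstr(buy.sum() == demand, name="demand")

# Goal (2) modelled as hard constraints on the split; these are the ones
# I want to relax when the model is infeasible (A's stock of 100 cannot
# cover its 150-widget share here).
split_constrs = m.addConstrs(
    (buy[s] == split[s] * demand for s in price), name="split")

# Goal (1): minimize spend.
m.setObjective(gp.quicksum(price[s] * buy[s] for s in price), GRB.MINIMIZE)

# Relax only the split constraints. rhspen[i] is the penalty per unit of
# violation of constrs[i]; these are the numbers I don't know how to scale
# so that different constraint families end up roughly equally important.
constrs = list(split_constrs.values())
rhspen = [1.0] * len(constrs)
# args: relaxobjtype=0 (minimize weighted sum of violations), minrelax=False,
#       no variable-bound relaxation (vars/lbpen/ubpen = None)
m.feasRelax(0, False, None, None, None, constrs, rhspen)
m.optimize()
```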
This sounds like a good candidate for hierarchical objectives - see https://www.gurobi.com/documentation/8.1/refman/multiple_objectives.html .
And that's how I modeled it the second go around.
The bit I don't quite get though - and I'm sure I'm missing something obvious - is whether the difference in the size of the calculated objective functions matters.
So if I have two objectives a and b, where a maxes out at 10^3 and b maxes out at 10^6 (say), how do I say a is 2x the weight (importance) of b? Would it be 2.0, or 2 * 10^6 / 10^3?
And if it's the second of the two, how would I pull these numbers out to set the weights? It's something that's not well covered in the docs (imo). Or if it is, I'm clearly not getting it...
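In other words, with two blended objectives at the same priority (obj_a and obj_b here are just placeholder expressions), which of these is the intended usage?

```python
# Option 1: weights express only the 2:1 preference, ignoring the magnitudes
m.setObjectiveN(obj_a, 0, weight=2.0)
m.setObjectiveN(obj_b, 1, weight=1.0)

# Option 2: divide out the calculated maxima first, then apply the 2:1 preference
max_a, max_b = 1e3, 1e6
m.setObjectiveN(obj_a, 0, weight=2.0 / max_a)
m.setObjectiveN(obj_b, 1, weight=1.0 / max_b)
```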
With hierarchical objectives (priorities), Gurobi solves the model with the highest-priority objective, then fixes that objective's value as a constraint and solves the model with the next-priority objective, and so on. At each step, it uses warm start information.
With this, don't put a large weight on any objective; keep the magnitude reasonable.
The objective constraints need not be strict; you can give a bit of tolerance when they become constraints.
This should do exactly what you want.
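For example, something along these lines (the objective expressions and the 1% tolerance are just placeholders):

```python
m.ModelSense = GRB.MINIMIZE

# The higher priority is solved first; when the solver moves on to the next
# priority, the earlier objective becomes a constraint that may degrade by
# at most reltol (1% here). Weights are left at their default, modest values.
m.setObjectiveN(spend_expr, 0, priority=2, reltol=0.01, name="spend")
m.setObjectiveN(split_dev_expr, 1, priority=1, name="split_deviation")
m.optimize()
```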