Soft Constraints being treated as Hard Constraints?
Hi.
I am working on the Sports League Scheduling problem and I am trying to solve it using an ILP approach. There are hard and soft constraints: hard constraints cannot be violated, while the objective should minimize the number of soft constraints that are violated. When solving for a feasible solution, I notice that the soft constraints are being treated as hard constraints, thus producing an infeasible model (see the 2 images below).
An example of a hard constraint is as follows:
An example of a soft constraint is as follows:
Variable d_i,c, CA1 is a deviation variable which is calculated in the following way:
This is my code implementation at the moment for the hard and soft constraints mentioned above, including the objective function, which has to minimize the penalty associated with each violated constraint:
constraint_411 = grb_model.addConstrs((quicksum(x[i,j,k] for j in team_ids if i != j for k in ca1hh.slots) <= ca1hh.max for ca1hh in CA1HH for i in ca1hh.teams), "Constraint 4.11")
i_CA1SH = []
for ca1sh in CA1SH:
    for i in ca1sh.teams:
        i_CA1SH.append((i, ca1sh))
di_CA1SH = grb_model.addVars(i_CA1SH)
for ca1sh in CA1SH:
    for i in ca1sh.teams:
        dev_model.addConstr(minimum == (k_RR - quicksum(x_ijs[i,j,s] for s in ca1sh.slots for j in team_ids if i != j)), name="Min")
        dev_model.addConstr(maximum == (quicksum(x_ijs[i,j,s] for s in ca1sh.slots for j in team_ids if i != j) - k_RR), name="Max")
        dev_model.addConstr(max_val == max_([minimum, maximum], constant=0), name="max")
        dev_model.setObjective(max_val)
        dev_model.update()
        di_CA1SH[i,ca1sh] = max_val.Obj
constraint_413 = grb_model.addConstrs((quicksum(x[i,j,k] for j in team_ids for k in ca1sh.slots if i != j) - ca1sh.max <= di_CA1SH[i,ca1sh] for ca1sh in CA1SH for i in ca1sh.teams), "Constraint 4.13")
Objective Function:
grb_model.setObjective((quicksum(h[i,k] + a[i,k] for i in team_ids for k in slot_ids)
+ quicksum(ca1sh.penalty * di_CA1SH[i,ca1sh] for ca1sh in CA1SH for i in ca1sh.teams)
+ quicksum(ca1sa.penalty * di_CA1SA[i,ca1sa] for ca1sa in CA1SA for i in ca1sa.teams)
+ quicksum(ca2sha.penalty * di_CA2SHA[i,ca2sha] for ca2sha in CA2SHA for i in ca2sha.teams1)
+ quicksum(ca2sh.penalty * di_CA2SH[i,ca2sh] for ca2sh in CA2SH for i in ca2sh.teams1)
+ quicksum(ca2sa.penalty * di_CA2SA[i,ca2sa] for ca2sa in CA2SA for i in ca2sa.teams1)
+ quicksum(ca3sha.penalty * di_CA3SHA[i,ca3sha] for ca3sha in CA3SHA for i in ca3sha.teams1)
+ quicksum(ca3sh.penalty * di_CA3SH[i,ca3sh] for ca3sh in CA3SH for i in ca3sh.teams1)
+ quicksum(ca3sa.penalty * di_CA3SA[i,ca3sa] for ca3sa in CA3SA for i in ca3sa.teams1)
+ quicksum(ca4gsha.penalty * di_CA4gSHA[ca4gsha] for ca4gsha in CA4gSHA)
+ quicksum(ca4gsh.penalty * di_CA4gSH[ca4gsh] for ca4gsh in CA4gSH)
+ quicksum(ca4gsa.penalty * di_CA4gSA[ca4gsa] for ca4gsa in CA4gSA)
+ quicksum(ca4esha.penalty * di_CA4eSHA[ca4esha] for ca4esha in CA4eSHA)
+ quicksum(ca4esh.penalty * di_CA4eSH[ca4esh] for ca4esh in CA4eSH)
+ quicksum(ca4esa.penalty * di_CA4eSA[ca4esa] for ca4esa in CA4eSA)
+ quicksum(ga1s.penalty * d_GA1S[ga1s] for ga1s in GA1S)
+ quicksum(br1sha.penalty * d_BR1SHA[i,br1sha] for br1sha in BR1SHA for i in br1sha.teams)
+ quicksum(br1sh.penalty * d_BR1SH[i,br1sh] for br1sh in BR1SH for i in br1sh.teams)
+ quicksum(br1sa.penalty * d_BR1SA[i,br1sa] for br1sa in BR1SA for i in br1sa.teams)
+ quicksum(br2s.penalty * d_BR2S[br2s] for br2s in BR2S)
+ quicksum(fa2s.penalty * d_FA2S[i,j,fa2s] for fa2s in FA2S for i in fa2s.teams for j in fa2s.teams if i < j)
+ quicksum(se1s.penalty * dij_SE1S[i,j,se1s] for se1s in SE1S for i in se1s.teams for j in se1s.teams if i < j)), GRB.MINIMIZE)
So, my main questions are as follows:
- Is there a specific way to set up soft constraints? At the moment it seems they are being treated as hard constraints, which is why I am getting an infeasible model.
- Is there a way to use one model, i.e. only grb_model, rather than two models to calculate the deviation variables such as d_i,c, CA1 shown above? I am currently using another model just to get a value, and I think such values should be dynamic.
Thanks in advance.
Oleg

Furthermore, I tried formulating this way:
i_CA1SH = []
for ca1sh in CA1SH:
    for i in ca1sh.teams:
        i_CA1SH.append((i, ca1sh))
di_CA1SH = grb_model.addVars(i_CA1SH)
for ca1sh in CA1SH:
    for i in ca1sh.teams:
        grb_model.addConstr(minimumg == (k_RR - quicksum(x_ijsg[i,j,s] for s in ca1sh.slots for j in team_ids if i != j)), name="Min")
        grb_model.addConstr(maximumg == (quicksum(x_ijsg[i,j,s] for s in ca1sh.slots for j in team_ids if i != j) - k_RR), name="Max")
        grb_model.addConstr(max_valg == max_([minimumg, maximumg], constant=0), name="max")
        di_CA1SH[i,ca1sh] = max_valg
constraint_413 = grb_model.addConstrs((quicksum(x[i,j,k] for j in team_ids for k in ca1sh.slots if i != j) - ca1sh.max <= di_CA1SH[i,ca1sh] for ca1sh in CA1SH for i in ca1sh.teams), "Constraint 4.13")

But I am still getting an infeasible solution.
The Knowledge Base article How do I determine why my model is infeasible? should be helpful here.
Hi. Thanks for this.
Regarding a particular question I asked in the first post: how can I distinguish hard and soft constraints in the model?
Oleg
I am no expert on hard/soft constraints, but I think one way to model a soft constraint is to add an additional free, so-called slack variable (sometimes also called a violation variable) that captures the violation of the constraint, and to add it with a penalty coefficient to the objective function. If this additional variable is positive in the optimal solution, you know that the soft constraint is violated, and you know by how much.
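As a plain-Python illustration of that arithmetic (a sketch only, with hypothetical names, not the Gurobi model itself): the slack absorbs exactly the amount by which the soft bound is exceeded, and the objective pays a penalty proportional to it.

```python
def soft_violation(count: int, cap: int) -> int:
    """Amount by which the soft constraint count <= cap is violated."""
    return max(0, count - cap)

def penalized_objective(base_cost: float, count: int, cap: int, penalty: float) -> float:
    """Base objective plus the penalty charged for violating the soft constraint."""
    return base_cost + penalty * soft_violation(count, cap)
```

When the bound holds, the violation is zero and the objective is unchanged; otherwise the solver trades the penalty against the rest of the objective instead of declaring infeasibility.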
Hi. Thanks for your reply.
How would you instantiate such slack variables?
Thanks
Oleg
Hi Oleg,
You define such variables just as normal optimization variables
slack = dev_model.addVars(I,J,K,name="slack")
and then use those in your soft constraints.
Don't forget to add a penalty for those slacks in your objective. In case you are minimizing, this could be something like \(\texttt{10e3 * slack[i,j,k]}\).
Best regards,
Jaromił
Hi.
How would I assign a slack variable to a constraint?
Thanks
You have to add the slack variable to the constraint, e.g.,
x[4,0,5] + x[4,1,5] + x[4,2,5] + x[4,3,5] + x[4,5,5] - slack <= 1
You can see that if the \(\texttt{x}\) variables actually add up to \(>1\), the slack variable can compensate it.
Depending on the sign of your inequality, you have to either add or subtract a slack variable to catch the violation of the given constraint. For an equality constraint, you will require 2 slack variables and add one and subtract the other.
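The equality case can be sketched in plain Python (hypothetical names; in the real model these would be two nonnegative Gurobi variables):

```python
def equality_slacks(lhs: float, rhs: float) -> tuple[float, float]:
    """Two nonnegative slacks such that lhs + s_plus - s_minus == rhs holds exactly."""
    s_plus = max(0.0, rhs - lhs)   # compensates when lhs falls short of rhs
    s_minus = max(0.0, lhs - rhs)  # compensates when lhs exceeds rhs
    return s_plus, s_minus
```

Whichever slack is positive tells you the direction of the violation, and its value tells you the magnitude; both get a penalty coefficient in the objective.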
So it would be along the lines of this:
i_CA1SH = []
for ca1sh in CA1SH:
    for i in ca1sh.teams:
        i_CA1SH.append((i, ca1sh))
di_CA1SH = grb_model.addVars(i_CA1SH)
for ca1sh in CA1SH:
    for i in ca1sh.teams:
        grb_model.addConstr(minimumg == (k_RR - quicksum(x_ijsg[i,j,s] for s in ca1sh.slots for j in team_ids if i != j)), name="Min")
        grb_model.addConstr(maximumg == (quicksum(x_ijsg[i,j,s] for s in ca1sh.slots for j in team_ids if i != j) - k_RR), name="Max")
        grb_model.addConstr(max_valg == max_([minimumg, maximumg], constant=0), name="max")
        di_CA1SH[i,ca1sh] = max_valg
constraint_413 = grb_model.addConstrs((quicksum(x[i,j,k] for j in team_ids for k in ca1sh.slots if i != j) - di_CA1SH[i,ca1sh] <= ca1sh.max for ca1sh in CA1SH for i in ca1sh.teams), "Constraint 4.13")

di_CA1SH is the slack variable in this case.
Also, can I assign values as I did here?
This will not work. \(\texttt{di_CA1SH}\) is an optimization variable and you cannot just assign some value to it. It has to be used in a constraint.
grb_model.addConstr(max_valg + di_CA1SH_plus[i,ca1sh] - di_CA1SH_minus[i,ca1sh] == max_([minimumg, maximumg], constant=0), name="max")
Note that I used 2 \(\texttt{di_CA1SH}\) variables as slacks, because the above is an equality constraint.
When doing what you suggested, it is giving me the following error:
Any ideas?
Right, that one is on me.
You will have to introduce an additional auxiliary variable, because general constraints are limited to exactly one variable on the left-hand side of an equality constraint.
max_valg_aux = grb_model.addVar() # defined with the same properties as max_valg
grb_model.addConstr(max_valg_aux == max_([minimumg, maximumg], constant=0), name="max")
grb_model.addConstr(max_valg + di_CA1SH_plus[i,ca1sh] - di_CA1SH_minus[i,ca1sh] == max_valg_aux, name="max_aux")
Hi again.
Upon inspection, I noticed that the model is infeasible because of the following constraints:
grb_model.addConstr((ca1sh.min - quicksum(x[i,j,s] for s in ca1sh.slots for j in team_ids if i != j)) == minimumg, name="Min")
grb_model.addConstr((quicksum(x[i,j,s] for s in ca1sh.slots for j in team_ids if i != j) - ca1sh.max) == maximumg, name="Max")

Somehow they are being added to the final solution, while I just want them to be temporary so that I can get a value to pass to the max_ function. minimumg and maximumg are not being added to the objective function.
Any ideas?
Hi Oleg,
How did you define the variables \(\texttt{maximumg, minimumg}\)? You have to make sure that these are free variables.
maximumg = grb_model.addVar(lb=-GRB.INFINITY, name="maximumg")
minimumg = grb_model.addVar(lb=-GRB.INFINITY, name="minimumg")

Best regards,
Jaromił
Hi, after doing what you suggested, I am still seeing the same issue when looking at the ILP file.
I have added a GitHub link so that you can access the code: https://github.com/oleggrech7/Sports_Scheduling
You need a different minimumg and maximumg variable for each constraint. Otherwise, you are forcing every equality constraint containing minimumg or maximumg to the same value. This means you will have to introduce multiple minimumg and maximumg variables, namely one for each equality constraint of interest.
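The intended result can be sketched with plain numbers (hypothetical names, not the Gurobi model): each (team, constraint) pair gets its own independently computed deviation, just as each equality constraint needs its own fresh minimum/maximum variable. A single shared value could not equal all of these deviations at once, which is exactly what forces infeasibility.

```python
def deviation(count: int, lo: int, hi: int) -> int:
    """Deviation of count from the band [lo, hi]; zero when inside the band."""
    return max(lo - count, count - hi, 0)

def deviations_per_key(counts: dict, bounds: dict) -> dict:
    """One independent deviation per (team, constraint) key."""
    return {key: deviation(c, *bounds[key]) for key, c in counts.items()}
```

If two keys produce deviations 1 and 2, no single shared variable can satisfy both equality constraints simultaneously; one variable per key resolves that.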
Hi.
Thank you for the suggestion, but the model is still infeasible.
Here is my updated code with another constraint:
# Constraint 4.13
max_valCA1SH = grb_model.addVar(lb=-GRB.INFINITY, name="max_valCA1SH")
minimumCA1SH = grb_model.addVar(lb=-GRB.INFINITY, name="minimumCA1SH")
maximumCA1SH = grb_model.addVar(lb=-GRB.INFINITY, name="maximumCA1SH")
i_CA1SH = []
for ca1sh in CA1SH:
    for i in ca1sh.teams:
        i_CA1SH.append((i, ca1sh))
di_CA1SH = grb_model.addVars(i_CA1SH)
for ca1sh in CA1SH:
    for i in ca1sh.teams:
        grb_model.addConstr(minimumCA1SH == (ca1sh.min - quicksum(x[i,j,s] for s in ca1sh.slots for j in team_ids if i != j)), name="Min")
        grb_model.addConstr(maximumCA1SH == (quicksum(x[i,j,s] for s in ca1sh.slots for j in team_ids if i != j) - ca1sh.max), name="Max")
        grb_model.addConstr(max_valCA1SH == max_([minimumCA1SH, maximumCA1SH], constant=0), name="max")
        grb_model.addConstr(di_CA1SH[i,ca1sh] == max_valCA1SH, name="max_aux")
constraint_413 = grb_model.addConstrs((quicksum(x[i,j,k] for j in team_ids for k in ca1sh.slots if i != j) - ca1sh.max <= di_CA1SH[i,ca1sh] for ca1sh in CA1SH for i in ca1sh.teams), "Constraint 4.13")

# Constraint 4.14
max_valCA1SA = grb_model.addVar(lb=-GRB.INFINITY, name="max_valCA1SA")
minimumCA1SA = grb_model.addVar(lb=-GRB.INFINITY, name="minimumCA1SA")
maximumCA1SA = grb_model.addVar(lb=-GRB.INFINITY, name="maximumCA1SA")
i_CA1SA = []
for ca1sa in CA1SA:
    for i in ca1sa.teams:
        i_CA1SA.append((i, ca1sa))
di_CA1SA = grb_model.addVars(i_CA1SA)
for ca1sa in CA1SA:
    for i in ca1sa.teams:
        grb_model.addConstr(minimumCA1SA == (ca1sa.min - quicksum(x[i,j,s] for s in ca1sa.slots for j in team_ids if i != j)), name="Min")
        grb_model.addConstr(maximumCA1SA == (quicksum(x[i,j,s] for s in ca1sa.slots for j in team_ids if i != j) - ca1sa.max), name="Max")
        grb_model.addConstr(max_valCA1SA == max_([minimumCA1SA, maximumCA1SA], constant=0), name="max")
        grb_model.addConstr(di_CA1SA[i,ca1sa] == max_valCA1SA, name="max_aux")
constraint_414 = grb_model.addConstrs((quicksum(x[j,i,k] for j in team_ids for k in ca1sa.slots if i != j) - ca1sa.max <= di_CA1SA[i,ca1sa] for ca1sa in CA1SA for i in ca1sa.teams), "Constraint 4.14")

Any ideas?
The code is updated on Git as well.
In the LP file Schedule.lp in your GitHub, you are still using the same maximumCA1SH and minimumCA1SH variables for every equality constraint. As mentioned before, you have to introduce multiple of these variables, one for each equality constraint.
Hi.
Thanks for your help! It worked like a charm. I wish I had known this beforehand.