Fast constraint rhs modification
Hi,
I am using gurobipy to repeatedly solve a model whose constraints' right-hand sides change. Right now, I am doing:
for i in range(number_of_constraints):
    model.getConstrByName(constraint_name[i]).rhs += delta[i]
All those constraints are in a mixed-integer linear model.
This works OK. However, since I am modifying number_of_constraints = 1000-2000 constraints, this loop can take 2-4 ms, which is a little slow for me. Are there approaches or more efficient API functions to make it a bit faster? Does switching to C++ help?
Thank you!

Hi Xuan,
If the delta is the same for all constraints, then a good alternative would be to define a gurobipy variable called "delta", which you add to the RHS of every constraint when defining your model. You can fix it to a constant by setting the lower bound equal to the upper bound, and then update this "constant variable" to the value you need before each new solve which will be analogous to changing the RHS of every constraint.
 Riley
Hi Riley,
Thanks for the reply! My apologies I forgot to mention that delta is computed for each constraint i hence will vary, but there might be a way that I can transform them to depend on a single variable to apply your brilliant trick.
In case I cannot, is there any other method that I can try?
Thank you!
Hi Xuan,
You can just make your delta values a coefficient to this special variable in each constraint. Eg let's call the special variable D. Then for the ith constraint add delta[i]*D to the RHS. Initially set D to 0. Then on the next solve set it to 1, then the next set it to 2, and so on. This way you'll be able to increase the RHS by different amounts per constraint.
You could also modify all RHS values at once, rather than looping in Python (which is slow), by using Model.setAttr(). You will find an example of changing the RHS on this page: https://www.gurobi.com/documentation/current/refman/py_model_setattr.html
You may also want to consider whether our multi-scenario feature would apply to your problem, as it could improve performance. In this case you predefine the different RHS values before optimization, and it is all handled in the one solve.
https://www.gurobi.com/documentation/current/refman/multiple_scenarios.html
I'm not sure which is the fastest approach, but if you test them all then let me know!
 Riley
Hi Riley,
Thanks to your great trick, I am able to modify the constraint significantly faster. I added a continuous variable x shared by all constraints and set the upper bound and lower bound identical.
The original problem is a pure MIP with only binary variables. Now when I am testing, it appears that the added continuous variable causes a roughly 30% slowdown. I guess it is because each node of the branch-and-bound tree now invokes the linear solver due to the added continuous variable.
Is there a way to keep the fast constraint modification, while still keeping the fast solving speed due to pure binary problem?
Thank you!
Hi Xuan,
I wouldn't expect the addition of the x variable to cause a slowdown. Sometimes adding a variable can destroy some nice property of a model which made it fast to solve, but that should not be the case here. Presolve will create a new model that does not feature the continuous variable; it will recognize that it is essentially a constant term disguised as a variable. Can I suggest you run a test where you set the upper and lower bounds on x to zero and compare runtimes to the original (perhaps with several different seeds to build some statistical confidence)? You really shouldn't see a difference in these runtimes (or in the objective value if solving to optimality). You could also call model.presolve() on both the original model and the model with x = 0, then write the presolved models to an LP file and compare them; I'd expect them to be the same.
Also with respect to your comment
I guess it is because each node of the branchbound tree now turns on the linear solver due to the added continuous variable.
a solution to the linear relaxation is calculated at every node of the branch-and-bound tree during the MIP algorithm, whether or not continuous variables are present in the model.
 Riley
Hi Riley,
Sorry when I made that comment my mind was not very clear.
I made a small test code using some data collected from my experiment, and I still observe the speed difference. The same thing happens when I set lb=ub=0.0. Here is one output of the solver:
Case with x0:
=======================================================================
Set parameter Seed to value 2
Presolve removed 769 rows and 25 columns
Presolve time: 0.00s
Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (linux64)
Thread count: 20 physical cores, 20 logical processors, using up to 20 threads
Optimize a model with 769 rows, 25 columns and 6070 nonzeros
Model fingerprint: 0x3d242dc6
Variable types: 5 continuous, 20 integer (20 binary)
Coefficient statistics:
Matrix range [1e-02, 8e+08]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+05]
RHS range [3e-01, 2e+09]
Warning: Model contains large matrix coefficients
Warning: Model contains large rhs
Consider reformulating model or setting NumericFocus parameter
to avoid numerical issues.
MIP start from previous solve produced solution with objective 100000 (0.00s)
Loaded MIP start from previous solve with objective 100000
Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)
Thread count was 1 (of 20 available processors)
Solution count 1: 100000
No other solutions better than 100000
Optimal solution found (tolerance 5.00e-01)
Best objective 1.000000000000e+05, best bound 1.000000000000e+05, gap 0.0000%
=====================================================================
Case without x0:
=====================================================================
Set parameter Seed to value 2
Presolve removed 769 rows and 21 columns
Presolve time: 0.00s
Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (linux64)
Thread count: 20 physical cores, 20 logical processors, using up to 20 threads
Optimize a model with 769 rows, 21 columns and 3255 nonzeros
Model fingerprint: 0xbf7d9c44
Variable types: 1 continuous, 20 integer (20 binary)
Coefficient statistics:
Matrix range [1e+00, 8e+08]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+05]
RHS range [3e-01, 2e+09]
Warning: Model contains large matrix coefficients
Warning: Model contains large rhs
Consider reformulating model or setting NumericFocus parameter
to avoid numerical issues.
MIP start from previous solve produced solution with objective 100000 (0.00s)
Loaded MIP start from previous solve with objective 100000
Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)
Thread count was 1 (of 20 available processors)
Solution count 1: 100000
No other solutions better than 100000
Optimal solution found (tolerance 5.00e-01)
Best objective 1.000000000000e+05, best bound 1.000000000000e+05, gap 0.0000%
=====================================================================
I do see that presolve comes in and removes a few more columns. But there is a difference in solving speed (0.48 ms with x0, and 0.31 ms without x0).
One thing I also notice is that the model has large coefficients. Does this contribute to the difference in speed? As I am trying to run this model at sub-ms speed, does presolve also cause some slowdown?
Thank you!
Also, I am not sure if it is proper to attach my code somehow in case you want to run the test.
Hi Xuan,
One thing I also notice is that the model has large coefficients. Does this contribute to the difference in speed? As I am trying to run this model at sub-ms speed, does presolve also cause some slowdown?
Yes, large coefficients, and in particular large ranges of coefficients (the difference between largest and smallest), can affect solver performance and the stability of the solution. You can find an in-depth discussion of this topic in the Guidelines for Numerical Issues, or, if you prefer, in this webinar.
As for your model speed, to convincingly say which one is faster you would need to run it many times, with different seeds, to build up statistical confidence in your conclusions. I am also curious about your model sizes:
Optimize a model with 769 rows, 25 columns and 6070 nonzeros
Optimize a model with 769 rows, 21 columns and 3255 nonzeros
If you were simply adding one variable, then I would not expect to see 4 extra columns and almost 3000 extra nonzeros; I'd expect one extra nonzero for each row.
 Riley
Hi Riley,
The variable x0 is a vector with 4 elements, that is why there are 4 additional columns. There are ~700 constraints, hence the additional 3000 nonzero terms.
I was trying to run the same experiment with several different seeds, as shown by the first row of my printout (please let me know if this is not the right way to seed). At least for the 8 seeds which I tried, the second model is consistently faster than the 1st one. I list the speed below (in ms):
With x0: [0.249, 0.565, 0.508, 0.454, 0.458, 0.465, 0.460, 0.481]
Without x0: [0.157, 0.320, 0.314, 0.303, 0.311, 0.306, 0.313, 0.310]
I also tried 100 samples. The average speed of the 1st model is 0.566 ms while the 2nd model is 0.386 ms.
 Xuan
Hi Xuan,
MIP start from previous solve produced solution with objective 100000 (0.00s)
Loaded MIP start from previous solve with objective 100000
Are you always providing a MIP start, and does this provide values for every variable in your model?
 Riley
Hi Riley,
I only build the model once, and in later runs I modify it. Therefore, I believe the MIP start is automatically generated inside Gurobi.
 Xuan
Hi Xuan,
I'd be interested to see the results of running
model.presolve().printStats()
for each of the two models.
This may give some clues about the difference.
 Riley
Hi Riley,
In addition to what I put down above, it seems both models give:
Statistics for modelPresolved:
Linear constraint matrix : 0 Constrs, 0 Vars, 0 NZs
Matrix coefficient range : [ 0, 0 ]
Objective coefficient range : [ 0, 0 ]
Variable bound range : [ 0, 0 ]
RHS coefficient range : [ 0, 0 ]
Does this mean there may be some bug in my code?
 Xuan
Hi Xuan,
This is a consequence of the model being solved in presolve. If the model is feasible, the optimal solution was determined by using the reductions, implications, and propagations that are performed during the presolve phase.
This is likely to indicate that your model is highly constrained. If this is surprising then perhaps there is a mistake in your formulation. If you provide code as a minimum reproducible example, with all necessary data to reproduce, I can take a quick look to see if anything seems unusual.
 Riley
Hi Riley,
I am unsure how many constraints make a problem highly constrained, but my model is relatively small (it has about 20 binary variables, hence solves fast) and has more than 700 constraints. I tried to turn off presolve through:
setParam('PreCrush', 1)
setParam('Presolve', 0)
setParam('Cuts', 0)
and did not notice a change in solving speed.
Regarding the minimum reproducible example, my script will be at least 200 lines as it contains 2 models and model updates, along with a small dataset. Should I hand it over through an upload portal such as https://www.gurobi.com/uploadamodel/?
 Xuan
Hi Riley,
Could you confirm that I should submit the model through an upload portal such as https://www.gurobi.com/uploadamodel/?
Thank you!
 Xuan
Hi Xuan,
That link is used for uploading files for commercial tickets. You would need to paste the code here in the forum, and link to the data, so that others can follow along. It doesn't need to be an exhaustive example; ideally a small example that illustrates your approach would be best.
- Riley
Hi Riley,
My apologies for this super late reply. I have condensed my code but unfortunately it still has ~150 lines.
I use the x0=0 initial condition and run it 10 times. This code has 2 models. The model model_master_with_x0 has the x0 variable; I update the constraints by changing the upper and lower bounds of x0 to match new initial conditions (which are also 0s). The model model_master_no_x0 does not have x0; I simply remove all the constraints and add them back with the new x0.
The small dataset that I use can be accessed here: https://drive.google.com/file/d/1B1NzqOkxwd6jCYNQdVKC8IaeUfuHd8vG/view?usp=sharing
Since the model is simplified, all solutions of z_master are 0s, and it is infeasible for nonzero x0.
I have the following results from the last two lines of printing, if that matches what you have:
[0.2460479736328125, 0.5881786346435547, 0.45013427734375, 0.43487548828125, 0.43892860412597656, 0.4630088806152344, 0.4830360412597656, 0.45609474182128906, 0.4508495330810547, 0.45490264892578125, 0.4448890686035156] 0.44644962657581677
[0.13208389282226562, 0.36597251892089844, 0.2930164337158203, 0.2751350402832031, 0.3190040588378906, 0.3249645233154297, 0.3108978271484375, 0.30303001403808594, 0.31495094299316406, 0.31495094299316406, 0.3139972686767578] 0.29709122397682886
- Xuan
import numpy as np
import gurobipy as go
import pickle
import random

with open('feasible_cuts.pkl', 'rb') as f:
    feas_cuts_offline = pickle.load(f)

# Constants ==================================================================================
N = 10; inf = 1e5; bigM = 1e3
len_x = 4; len_u = 1; len_lam = 2
# End: Constants =============================================================================

# initial x0
x0 = np.array([0.0, 0.0, 0.0, 0.0])
list_t_master_with_x0 = []
list_t_master_no_x0 = []
list_x0 = [np.array([0.0, 0.0, 0.0, 0.0])]*10
# z solutions
sol_with_x0 = []
sol_with_no_x0 = []
id_dual_x0_right = np.array([4, 5, 6, 7]); id_dual_x0_left = np.array([0, 1, 2, 3])
id_dual_lam_z = np.array([[ 84, 85], [106, 107], [128, 129], [150, 151], [172, 173], [194, 195], [216, 217], [238, 239], [260, 261], [282, 283]])
id_dual_x_lam_u_z = np.array([[ 86, 87], [108, 109], [130, 131], [152, 153], [174, 175], [196, 197], [218, 219], [240, 241], [262, 263], [284, 285]])

# Function to compute cutting plane, use or not use x0 =======================================
def compute_cutting_plane(param, x0_input, z_input):
    this_inf_L_infeas = 0.0
    for i_x in range(len_x):
        this_inf_L_infeas += (param[id_dual_x0_right[i_x]] - param[id_dual_x0_left[i_x]])*x0_input[i_x]
    this_inf_L_infeas += sum((param[id_dual_lam_z[i_t, i_lam]] - param[id_dual_x_lam_u_z[i_t, i_lam]])*z_input[i_t, i_lam]*bigM for i_lam in range(len_lam) for i_t in range(N))
    return this_inf_L_infeas

# Master problem with x0 =====================================================================
model_master_with_x0 = go.Model("GBD_LCP_master")
# model_master_with_x0.setParam('OutputFlag', 0)
model_master_with_x0.setParam('MIPGap', 0.5)
model_master_with_x0.setParam('PreCrush', 1)
model_master_with_x0.setParam('Presolve', 0)
model_master_with_x0.setParam('Cuts', 0)
z_master_with_x0 = model_master_with_x0.addMVar((N, len_lam), vtype=go.GRB.BINARY, name="z_master")
z0_master_with_x0 = model_master_with_x0.addMVar((1, ), lb=inf, ub=inf, name="z0")
x0_master_with_x0 = model_master_with_x0.addMVar((len_x, ), lb=inf, ub=inf, name='x0_master')
for tt in range(N):
    model_master_with_x0.addConstr(go.quicksum(z_master_with_x0[tt, i_lam] for i_lam in range(len_lam)) <= 1.0)
obj_master = z0_master_with_x0[0]
model_master_with_x0.setObjective(obj_master, go.GRB.MINIMIZE)
model_master_with_x0.update()
# End: Master problem with x0 ================================================================

# Master problem without x0 ==================================================================
model_master_no_x0 = go.Model("GBD_LCP_master")
# model_master_no_x0.setParam('OutputFlag', 0)
model_master_no_x0.setParam('MIPGap', 0.5)
model_master_no_x0.setParam('PreCrush', 1)
model_master_no_x0.setParam('Presolve', 0)
model_master_no_x0.setParam('Cuts', 0)
z_master_no_x0 = model_master_no_x0.addMVar((N, len_lam), vtype=go.GRB.BINARY, name="z_master")
z0_master_no_x0 = model_master_no_x0.addMVar((1, ), lb=inf, ub=inf, name="z0")
for tt in range(N):
    model_master_no_x0.addConstr(go.quicksum(z_master_no_x0[tt, i_lam] for i_lam in range(len_lam)) <= 1.0)
obj_master = z0_master_no_x0[0]
model_master_no_x0.setObjective(obj_master, go.GRB.MINIMIZE)
model_master_no_x0.update()
# End: Master problem without x0 =============================================================

# Impose infeasible cut constraints : with x0 ================================================
for i_p in range(len(feas_cuts_offline)):
    param = feas_cuts_offline[i_p]
    this_inf_L_infeas = compute_cutting_plane(param, x0_master_with_x0, z_master_with_x0)
    model_master_with_x0.addConstr(this_inf_L_infeas >= 0.0)
for i_x in range(len_x):
    x0_master_with_x0[i_x].ub = x0[i_x]
    x0_master_with_x0[i_x].lb = x0[i_x]
model_master_with_x0.update()
# End: Impose infeasible cut constraints : with x0 ===========================================

# Impose infeasible cut constraints : w/o x0 =================================================
cuts_without_x0 = []
for i_p in range(len(feas_cuts_offline)):
    param = feas_cuts_offline[i_p]
    inf_L_infeas = compute_cutting_plane(param, x0, z_master_no_x0)
    cuts_without_x0.append(model_master_no_x0.addConstr(inf_L_infeas >= 0.0))
# End: Impose infeasible cut constraints : w/o x0 ============================================

model_master_with_x0.optimize()
list_t_master_with_x0.append(1000*model_master_with_x0.Runtime)
sol_with_x0.append(np.array(z_master_with_x0.X))
print("")
model_master_no_x0.optimize()
list_t_master_no_x0.append(1000*model_master_no_x0.Runtime)
sol_with_no_x0.append(np.array(z_master_no_x0.X))

# Update x0 and solve ========================================================================
for i_x0 in range(len(list_x0)):
    print("======================================================================================")
    x0_new = list_x0[i_x0]
    for i_x in range(len_x):
        x0_master_with_x0[i_x].ub = x0_new[i_x]
        x0_master_with_x0[i_x].lb = x0_new[i_x]
    model_master_with_x0.update()
    for cuts in cuts_without_x0:
        model_master_no_x0.remove(cuts)
    cuts_without_x0 = []
    # Reimpose constraints
    for i_p in range(len(feas_cuts_offline)):
        param = feas_cuts_offline[i_p]
        inf_L_infeas = compute_cutting_plane(param, x0_new, z_master_no_x0)
        cuts_without_x0.append(model_master_no_x0.addConstr(inf_L_infeas >= 0.0))
    ss = random.randint(0, 100)
    model_master_with_x0.setParam('Seed', ss)
    model_master_with_x0.presolve().printStats()
    # p = model_master_with_x0.presolve()
    # model_master_with_x0.write("with_x0.mps")
    model_master_with_x0.optimize()
    list_t_master_with_x0.append(1000*model_master_with_x0.Runtime)
    sol_with_x0.append(np.array(z_master_with_x0.X))
    print("")
    ss = random.randint(0, 100)
    model_master_no_x0.setParam('Seed', ss)
    model_master_no_x0.presolve().printStats()
    # p = model_master_no_x0.presolve()
    # model_master_no_x0.write("with_no_x0.mps")
    model_master_no_x0.optimize()
    list_t_master_no_x0.append(1000*model_master_no_x0.Runtime)
    sol_with_no_x0.append(np.array(z_master_no_x0.X))
# End: Update x0 and solve ===================================================================

print(list_t_master_with_x0, np.average(np.array(list_t_master_with_x0)))
print(list_t_master_no_x0, np.average(np.array(list_t_master_no_x0)))
Hi Xuan,
I've been away but am happy to make some time this week to have a look at this. I can't access .pkl files for security reasons. Can you upload the dataset as a CSV?
- Riley
Hi Riley,
Does this work: https://drive.google.com/file/d/1vSNy0fVahVJsfrReZzjI9lfIXBEe7sUW/view?usp=sharing?
I use the following code to read it into a list of np.array:
import csv
feas_cuts_offline = []
with open('feasible_cuts.csv', newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=' ')
    for row in reader:
        feas_cuts_offline.append(np.array([float(row[ii]) for ii in range(len(row))]))
- Xuan
Hi Xuan,
I can confirm it works. I'll dive into the output on the weekend if I don't find time before.
 Riley
Hi Xuan,
I've taken a look at the code. It is a little difficult to understand what it is doing without pouring lots of time into it, but I want you to consider the following parts of your code:

list_x0 = [np.array([0.0, 0.0, 0.0, 0.0])]*10
...
for i_x0 in range(len(list_x0)):
    x0_new = list_x0[i_x0]
    for i_x in range(len_x):
        x0_master_with_x0[i_x].ub = x0_new[i_x]
        x0_master_with_x0[i_x].lb = x0_new[i_x]

I think this is a mistake: the value of x0_new[i_x] is always 0. Try inserting this code afterwards to see this:

for v in model_master_with_x0.getVars():
    print(v.VarName, v.lb, v.ub)

Consequently, I suspect that neither model changes as the loop iterates, which I would guess is not what you intended.
Indeed it looks like both models are solved during presolve (when presolve is on) and there is a tiny difference in runtime between them. I guess the tiny difference can be attributed to the fact that the model with extra variables gives presolve a bit more work to do. I expect that if the runtimes were larger then the difference between them would be negligible, and this is what I had in mind when I originally suggested that adding "fixed" variables would not affect the runtime; I had mistakenly assumed the runtimes were a bit more substantial.
When presolve is off, we still see a slightly larger difference (but ultimately still tiny). It's hard to comment on this as the model is essentially loading the previous MIP start, and then presumably verifying that the MIP start is optimal (since the models don't appear to be changing) without doing any "solving" per se.
A small suggestion for your implementation that I picked up on along the way:
The following

for tt in range(N):
    model_master_no_x0.addConstr(go.quicksum(z_master_no_x0[tt, i_lam] for i_lam in range(len_lam)) <= 1.0)

is equivalent to

for tt in range(N):
    model_master_no_x0.addConstr(z_master_no_x0[tt, :].sum() <= 1)

which is equivalent to

model_master_no_x0.addConstr(z_master_no_x0.sum(axis=1) <= 1)
 Riley
Hi Riley,
Thanks for the feedback and suggestions! I apologize for the complicated code; I tried to keep the structure of the model to represent the original one.
The x0_new[i_x] is always 0. This is indeed intended, as I follow one of your earlier suggestions to make lb=ub=0 and solve with various seeds to ensure results are consistent:

ss = random.randint(0, 100)
model_master_no_x0.setParam('Seed', ss)

I agree with you that the time difference comes from adding additional variables, as the original model is small. It bothers me slightly as I am trying to push the performance, but not significantly. If this becomes a major slowdown, I will try alternative approaches.
 Xuan
Hi Xuan,
Coming back to your very first question: if we have a MILP defined by \(m\) constraints and \(n\) variables:
\[Ax \leq b\]
and you want to create a series of related models \(M(p)\) by adjusting the RHS:
\[Ax \leq b + p\delta\]
where \(\delta \in \mathbb{R}^m\) and \(p = 1,2,\ldots \) then the idea is to create a single model \(M'(p)\)
\[\begin{eqnarray}Ax - y\delta \leq b \\ y = p \end{eqnarray}\]
where \(y\) is a continuous variable. Setting
y.lb = y.ub = p
gives us the \(y = p\) constraint and I'd expect \(M(p)\) to be equivalent to \(M'(p)\) after presolve.
If you are setting the lower and upper bounds to zero each time, then this doesn't seem to be following the intent of what was suggested. Hopefully the above makes it clearer.
 Riley
Hi Riley,
Yes, what you said is correct; that is why the code I put here is a simplified model for debugging purposes. I have the complete model (~400 lines) that sets lb=ub to various nonzero values. Since I removed extra things from the model, it is only feasible for lb=ub=0. But that shouldn't matter, as the idea of that code is to show that a model without y runs faster than a model with y and y.lb=y.ub, and the difference in speed is the same whether y.lb=y.ub=0 or not.
Xuan
Ah I see, that is good to hear. Best of luck with the project!
 Riley