Managing a large number of constraints
Hi Gurobi team!
I need to add a very large number of constraints to my model (about 18 million) in order to linearize it, but building them in Pyomo is extremely slow: my code stalls for a long time while generating them.
model.fj and model.lin are binary variables: model.lin[c,s] must equal 1 only if the product model.fj[f,conn[c,1]] * model.fj[f,conn[c,2]] * model.fj[f,s] equals 1.
- f ranges from 1 to 37
- c ranges from 1 to 1440
- s ranges from 1 to 384 (350 nonzeros)
How can I manage the constraint in a better way?
Thanks,
Gianluca.
from pyomo.environ import Constraint

def linearizzazione_rule(model, f, c, s):
    # Force lin[c,s] to 1 when all three binaries are 1:
    # lin >= fj1 + fj2 + fj3 - 2
    if nu[s, 1] != 0:
        return model.lin[c, s] >= (
            model.fj[f, connect[c, 1]]
            + model.fj[f, connect[c, 2]]
            + model.fj[f, s]
            - 2
        )
    else:
        return Constraint.Skip

model.linearizzazione_constr = Constraint(
    model.FD, model.CL, model.SSN,
    rule=linearizzazione_rule,
)

Hi Gianluca,
There is no good way to reformulate the constraint. The problem is that each variable carries different indices, i.e., even if the constraint read \(x_{c,s} \geq x_{f,c}\cdot x_{f,s}\), you would still need the 18 million constraints, because you have to iterate over all \(c,f,s\) combinations. Adding a large number of constraints almost always results in long construction times. However, you could try some of the following:
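As a side note, the standard linearization of a product of three binaries needs both inequality directions: the "\(\geq\) sum minus 2" constraint forces the product variable up, and the "\(\leq\) each factor" constraints force it down. A quick stand-alone check in plain Python (independent of Pyomo and Gurobi) confirms that the two families together pin the auxiliary variable to the product at every 0/1 point:

```python
from itertools import product

# For binaries x1, x2, x3, the linearization of z = x1*x2*x3 is:
#   z >= x1 + x2 + x3 - 2   (forces z = 1 when all three are 1)
#   z <= x1, z <= x2, z <= x3 (forces z = 0 otherwise)
for x1, x2, x3 in product((0, 1), repeat=3):
    feasible_z = [
        z for z in (0, 1)
        if z >= x1 + x2 + x3 - 2
        and z <= x1 and z <= x2 and z <= x3
    ]
    # Exactly one feasible value remains, and it equals the product.
    assert feasible_z == [x1 * x2 * x3]
```

If only the lower-bound direction is present (as in the snippet above), the auxiliary variable can float to 1 even when the product is 0, unless the objective already pushes it down.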
- Reduce the number of \(c\) indices
- You could try to perform some presolving on your own, i.e., you could try to determine whether some of the variables are 0 or 1 if you have additional knowledge
- You could try using Gurobi's Python API instead of Pyomo to avoid additional overhead possibly added by Pyomo
- You could generate the model once and write it to an MPS or LP file, see Model.write(). Subsequently, you can compress the file with, e.g., \(\texttt{bz2}\), and read the compressed model instead of constructing the model every time
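The write-once workflow in the last suggestion can be sketched with the standard library alone. The MPS content below is a placeholder standing in for the file you would write once via Model.write (gurobipy) or model.write (Pyomo); the file names are illustrative:

```python
import bz2
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    mps_path = os.path.join(tmp, "model.mps")
    bz2_path = mps_path + ".bz2"

    # Placeholder standing in for the real file written once with
    # Model.write("model.mps") (gurobipy) or model.write("model.mps") (Pyomo).
    with open(mps_path, "wb") as f:
        f.write(b"NAME placeholder\nROWS\n N  obj\nCOLUMNS\nRHS\nENDATA\n")

    # Compress with the standard-library bz2 module.
    with open(mps_path, "rb") as src, bz2.open(bz2_path, "wb") as dst:
        shutil.copyfileobj(src, dst)

    # Gurobi's read routines accept bz2-compressed files directly, e.g.
    # gurobipy.read("model.mps.bz2"), so later runs skip model construction.
    with bz2.open(bz2_path, "rb") as f:
        assert f.read().startswith(b"NAME")
```

Reading a prebuilt file sidesteps the per-constraint Python overhead entirely on every run after the first.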
Best regards,
Jaromił