Slow model building in gurobipy
Hi,
On recommendation I switched my thesis project to the gurobipy interface, but I'm having some trouble with the model-building time. I work with very large sets and solve the models on a cluster computer. I noticed that model building scales very badly in the variable-declaration part. I have 12 variables, most with 3 dimensions (year -> 4, hour -> 8760, and a specific set). I add the variables like this:
A = model.addVars(Y, T, G, lb=0.0, ub = GRB.INFINITY, name="A")
For one year this still works great, but if I add more years, memory usage explodes and the model becomes impossible to build.
The rest of the model then builds fine again, except for two constraints. I'm wondering whether there are ways and methods within gurobipy to speed up model building. The constraints look like this:
model.addConstrs((
    gp.quicksum(A[y, t, g] for g in dict1[n])
    + gp.quicksum(B[y, t, r] for r in helper1(n, value1))
    + gp.quicksum(C[y, t, d] for d in dict2[n])
    - gp.quicksum(D[y, t, d] for d in dict3[n])
    + gp.quicksum(E[y, t, s] - F[y, t, s] for s in dict4[n])
    - (G[y] * helper2(n, t, value2))
    + H[y, t, n]
    == gp.quicksum(i_dict[n][l] * J[y, t, l] for l in L) - K[y, t, n]
    for n in N for t in T for y in Y))
model.addConstrs((gp.quicksum(x_multi_dict[l][c] * P[y, t, l] for l in L) == 0
    for c in C for t in T for y in Y), name="Flow_const.")
All helper functions already return dicts and are quite fast; dicts are used throughout for speed. Any help is very much appreciated!
Paul
Official comment
This post is more than three years old. Some information may not be up to date. For current information, please check the Gurobi Documentation or Knowledge Base. If you need more help, please create a new post in the community forum.
Hi Paul,
What size are your specific sets? If they are not too large, I don't really see a reason why your constraints would be generated slowly. The number of your constraints would be \(4\cdot 8760 \cdot |N|\), where \(N\) is the index set of the constraint. Each constraint then has approximately \(\sum_i |G_i|\) nonzeros, where the \(G_i\) are your specific sets, so the building time and memory usage really depend on the sizes of those sets.
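For a back-of-envelope check, the count can be computed directly; the \(\texttt{n_size}\) and \(\texttt{avg_set}\) values below are purely hypothetical placeholders, since the actual set sizes were not yet known at this point in the thread:

```python
# Rough model-size estimate. n_size and avg_set are made-up placeholder
# values; only the year and hour counts come from the original post.
years, hours = 4, 8760
n_size = 1000   # hypothetical size of the constraint index set N
avg_set = 10    # hypothetical average size of each specific set G_i

num_constrs = years * hours * n_size
nonzeros_per_constr = 5 * avg_set  # ~5 quicksum blocks per constraint

print(num_constrs)            # 35,040,000 constraints with these numbers
print(nonzeros_per_constr)    # ~50 nonzeros each
```

Even with these modest placeholder sizes, the constraint count alone reaches tens of millions, which is why set sizes dominate both build time and memory.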
Could you provide a minimal working example? Maybe there is still something that can be done.
Best regards,
Jaromił
Hi Jaromił,
thanks for the prompt reply. G, for example, has 2920 elements. Creating the variable A alone takes about 10 minutes and consumes about 50 GB of RAM at peak.
I created a minimal example. Some things are simplified further, but the sizes are representative.
import gurobipy as gp
from gurobipy import GRB
Y = range(4)
T = range(8760)
G = range(2920)
S = range(53)
R = range(1794)
L = range(3077)
N = range(2302)
C = range(776)
i_dict = {y:{x:0 for x in L} for y in N}
list1 = list(range(10))
x_multi_dict = {y:{x:0 for x in C} for y in L}
model = gp.Model("example")
A = model.addVars(Y, T, G, lb=0.0, ub=GRB.INFINITY, name="A")
B = model.addVars(Y, T, R, lb=0.0, ub=GRB.INFINITY, name="B")
C_variable = model.addVars(Y, T, L, lb=-GRB.INFINITY, ub=GRB.INFINITY, name="C")
D = model.addVars(Y, T, L, lb=-GRB.INFINITY, ub=GRB.INFINITY, name="D")
H = model.addVars(Y, T, N, lb=0.0, ub=GRB.INFINITY, name="H")
K = model.addVars(Y, T, N, lb=0.0, ub=GRB.INFINITY, name="K")
E = model.addVars(Y, T, S, lb=0.0, ub=GRB.INFINITY, vtype=GRB.CONTINUOUS, name="E")
F = model.addVars(Y, T, S, lb=0.0, ub=GRB.INFINITY, vtype=GRB.CONTINUOUS, name="F")
model.addConstrs((
    gp.quicksum(A[y, t, g] for g in list1)
    + gp.quicksum(B[y, t, r] for r in list1)
    + gp.quicksum(C_variable[y, t, d] for d in list1)
    - gp.quicksum(C_variable[y, t, d] for d in list1)
    + gp.quicksum(E[y, t, s] - F[y, t, s] for s in list1)
    - (G[y] * 1)
    + H[y, t, n]
    == gp.quicksum(i_dict[n][l] * D[y, t, l] for l in L) - K[y, t, n]
    for n in N for t in T for y in Y))
model.addConstrs((gp.quicksum(x_multi_dict[l][c] * D[y, t, l] for l in L) == 0
    for c in C for t in T for y in Y), name="Flow_const.")

Thanks and best regards,
Paul
Hi Paul,
Your \(\texttt{A}\) variables alone number over 100 million. It is thus totally expected that your model will need a lot of memory and time to build. Note that Gurobi has to store the name, indices, bounds, and type of each variable.
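For reference, that count follows directly from the index sets in the minimal example above:

```python
# Number of entries in the variable object A, indexed over Y x T x G:
# |Y| * |T| * |G| = 4 * 8760 * 2920.
num_A_vars = 4 * 8760 * 2920
print(num_A_vars)  # 102316800 -> over 100 million variables for A alone
```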
You have multiple objects of similar size, and the constraints you are trying to construct have an immense number of nonzeros. I don't see a real workaround here except for reducing the sizes of your specific sets drastically.
You may want to switch to a lower-level language such as C or C++, which may provide some speedups, especially with extremely large models. Please note, however, that simply changing language will not completely solve your memory/time problems.
Best regards,
Jaromił
Hi Jaromił,
The sizes are enormous, I agree. However, it does work very well if I take just one year. I did not expect scaling by a factor of 4 to make building intractable.
Do you still see some possible improvements in the way I formulated the problem? Especially for the constraints?
Best regards,
Paul
Hi Paul,
If the required memory for one year was, let's say, 25 GB, then scaling by a factor of 4 will require about 100 GB of memory. Given the large size for only one year already, scaling up will quickly run into out-of-memory problems.
"Do you still see some possible improvements in the way I formulated the problem? Especially for the constraints?"
You are using \(\texttt{list1}\) in every \(\texttt{quicksum}\) call. You can save time by writing just one longer \(\texttt{quicksum}\):
model.addConstrs((
    gp.quicksum(A[y, t, z] + B[y, t, z] + E[y, t, z] - F[y, t, z] for z in list1)
    - (G[y] * 1)
    + H[y, t, n]
    == gp.quicksum(i_dict[n][l] * D[y, t, l] for l in L) - K[y, t, n]
    for n in N for t in T for y in Y))

Note that I removed the \(\texttt{C_variable}\) terms because they cancel out.
The second constraint looks fine except for the large set sizes.
Constructing a LinExpr from two lists, via its constructor or the addTerms method, is faster than quicksum. However, you will need to build the coefficient and variable lists first.
Best regards,
Jaromił