Error 10001: Out of memory
I have an academic licence for Gurobi and work in Python (on PyCharm). I am solving a huge number of assignment problems and sometimes encounter an out-of-memory error when solving one of them (see below).
Set parameter Username
Set parameter LogFile to value "gurobi.log"
Academic license - for non-commercial use only - expires 2024-01-04
Using license file C:\Users\innocente\gurobi.lic
Gurobi Optimizer version 10.0.2 build v10.0.2rc0 (win64)
Copyright (c) 2023, Gurobi Optimization, LLC
Read LP format model from file C:\Users\INNOCE~1\AppData\Local\Temp\92fa4f2a41464dcdbb902fbd0016aaff-pulp.lp
Reading time = 0.03 seconds
OBJ: 160 rows, 6400 columns, 12800 nonzeros
CPU model: Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz, instruction set [SSE2|AVX|AVX2]
Thread count: 2 physical cores, 4 logical processors, using up to 4 threads
Optimize a model with 160 rows, 6400 columns and 12800 nonzeros
Model fingerprint: 0xcae5e3e6
Variable types: 0 continuous, 6400 integer (6400 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [2e+00, 1e+04]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 1e+00]
Found heuristic solution: objective 105627.00000
Presolve removed 110 rows and 5779 columns
Presolve time: 0.01s
Explored 0 nodes (0 simplex iterations) in 0.01 seconds (0.00 work units)
Thread count was 1 (of 4 available processors)
Solution count 1: 105627
No other solutions better than 0
Best objective 1.056270000000e+05, best bound -, gap -
Unable to retrieve attribute 'X'
Error 10001: Out of memory
Traceback (most recent call last):
File "C:\Users\innocente\PycharmProjects\pythonProject\main_bis_time_vl_TW.py", line 369, in <module>
Backwards_Gradients_Calculation()
File "C:\Users\innocente\PycharmProjects\pythonProject\main_bis_time_vl_TW.py", line 246, in Backwards_Gradients_Calculation
v[k, t, n, m] = C_case2 - C
~~~~~~~~^~~
TypeError: unsupported operand type(s) for -: 'NoneType' and 'float'
What can I do to avoid this error?
Hi Emma,
Usually we would direct people to the following article in our Knowledge Base: How do I avoid an out-of-memory condition?
However, this does not look like a typical situation where we see out-of-memory conditions. Your model is quite small and (unless you have omitted it) there is no node log information, i.e. no branch-and-bound tree, which is where these errors typically occur.
Is the memory error occurring in a callback function that you have written? How much RAM does your machine have?
- Riley
Hi Riley,
Thank you for your answer!
I had already read the article in the Knowledge Base, but as you said, I didn't really see how it could apply to my model.
Yes, indeed, the memory error occurs in a callback function (I'm calling the function inside another function). My machine has 16 GB of RAM.
Thank you very much for your help!
Emma
Hi Emma,
Given the error is happening inside a callback, I'd expect the out-of-memory error is not related to Gurobi.
I think we'd need to see what is happening inside the callback in order to try and assist; feel free to paste your code or a link to it.
- Riley
Hi Riley,
I've just edited my previous post! Sorry for the inconvenience.
Emma
Hi Emma,
Thanks for posting the code.
I am reasonably confident that the issue is not with Gurobi here. The log file you posted shows a very small model - not one that would result in a memory error on a 16 GB machine. Perhaps you could insert some print statements into your code to print out the size of the models being solved? If they're all of a similarly small size then Gurobi won't be the cause.
Sometimes generating many models, like you are doing, results in a memory error if the model is not freed from memory, or "disposed" of properly, after it has been solved. Because PuLP is calling the Gurobi solver via the command line, I don't think this could be occurring for the Gurobi models. Could it be happening for the PuLP models? Perhaps. I am not that familiar with PuLP and don't know how or when it frees its models, but I would try using Python's del statement on your models after solving them and retrieving what you need, just in case.
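As a rough sketch of that idea (the loop, num_problems, and the variable names here are illustrative, not taken from your code):
import pulp
from pulp import GUROBI_CMD

for k in range(num_problems):  # num_problems stands in for however many assignment problems you solve
    prob = pulp.LpProblem("Assignment_Problem", pulp.LpMinimize)
    # ... build variables, objective and constraints on prob ...
    prob.solve(GUROBI_CMD(msg=True))
    objective_value = pulp.value(prob.objective)  # retrieve whatever you need first
    del prob  # then drop the reference so Python can free the model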
I think it would be reasonably quick to replace PuLP in your code with gurobipy (or Pyomo if you want to stay solver agnostic), so maybe this is worth trying to see if it makes a difference.
- Riley
Hi Riley,
Thanks a lot for your help!
If I replace PuLP with Gurobipy, will it automatically free the models from memory (without having to add the del statement)?
Emma
Hi Emma,
If you replace PuLP with Gurobipy then you would not use the del statement.
Making sure the model is disposed of can be done either via Model.dispose() or, even better, by using a context manager (in which case no dispose statement is needed):
with gp.Model("Assignment_Problem") as model1:
# define model
m = ...
m.optimize()
# no del or dispose statement neededThe main advantage of switching to gurobipy in this case is that we are experts in gurobipy, not PuLP, so are better positioned to help.
- Riley
Hi Riley,
Thank you for your valuable help!
Could it also be responsible for this kind of error:
Traceback (most recent call last):
File "C:\Users\innocente\PycharmProjects\pythonProject\main_bis_time_vl_TW.py", line 364, in <module>
Backwards_Gradients_Calculation()
File "C:\Users\innocente\PycharmProjects\pythonProject\main_bis_time_vl_TW.py", line 240, in Backwards_Gradients_Calculation
C_case2,Y_case2 = C_Nt(N_R_temp, N_L_temp)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\innocente\PycharmProjects\pythonProject\main_bis_time_vl_TW.py", line 105, in C_Nt
static_assignment_problem.solve(GUROBI_CMD(msg=True))
File "C:\Users\innocente\PycharmProjects\pythonProject\venv\Lib\site-packages\pulp\pulp.py", line 1913, in solve
status = solver.actualSolve(self, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\innocente\PycharmProjects\pythonProject\venv\Lib\site-packages\pulp\apis\gurobi_api.py", line 397, in actualSolve
raise PulpSolverError("PuLP: Error while trying to execute " + self.path)
pulp.apis.core.PulpSolverError: PuLP: Error while trying to execute gurobi_cl.exe

Process finished with exit code 1
Thanks,
Emma
Hi Emma,
The error suggests that PuLP couldn't run Gurobi for some reason. Unfortunately it does not give any clues as to what this reason could be.
You can run Gurobi from the command line using the following:
gurobi_cl model_file.mps
If you don't have an MPS file you can grab one from inside the Gurobi folder. For example, on Mac you can find a couple of MPS files inside the /Library/gurobi1002/macos_universal2/examples/data/ folder. The gurobi1002 part may be named differently depending on which version you are using. Windows will have a similar Gurobi folder.
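For example, on Windows with a default installation the command would look something like the following (the exact folder name depends on your Gurobi version, and afiro.mps is just one of the example files that typically ships in that folder):
gurobi_cl C:\gurobi1002\win64\examples\data\afiro.mps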
Now if running the above gurobi_cl command doesn't work then Gurobi hasn't been set up correctly. Can you let me know how you go with this?
- Riley
Hi Riley,
When I ran the command you mentioned on an mps file I had, here is what I got:
If I stop using PuLP and instead use gurobipy, do you think it would prevent those errors?
Thanks,
Emma
Hi Emma,
Looks like your Gurobi install is set up correctly. PuLP isn't going to give us any more information, so it'll be hard to fix without knowing what the problem is.
At least if you use gurobipy, and you encounter an error, you will get a meaningful error message which can help you fix the issue.
- Riley
Hi Riley,
I will follow your advice and start using gurobipy. Could you confirm that I just have to import the gurobipy package in order to start using it? Nothing else is required?
To be sure I understood correctly: at the end of the following block of instructions, the model will be freed automatically?
Also, if I want to return, let's say, the value of the objective function, must the return statement be inside the block of instructions or outside?
Thanks,
Emma
Hi Emma,
Yes, you can use
import gurobipy as gp
in your code and I expect it to work for you. Some users experience licensing issues that need troubleshooting, but given your command line Gurobi worked OK, I expect you will be fine.
The context manager will handle freeing the model, so you are correct in your understanding.
You can use a return statement wherever you like inside a function; however, if you're returning a value from the model then it will need to be inside the context manager block (because the model doesn't exist outside it).
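As a rough illustration (the function name solve_assignment, the cost argument, and the model details are made up for this example rather than taken from your code):
import gurobipy as gp
from gurobipy import GRB

def solve_assignment(cost):
    n = len(cost)
    with gp.Model("Assignment_Problem") as m:
        x = m.addVars(n, n, vtype=GRB.BINARY)
        m.addConstrs(x.sum(i, "*") == 1 for i in range(n))  # each resource is assigned one task
        m.addConstrs(x.sum("*", j) == 1 for j in range(n))  # each task is assigned one resource
        m.setObjective(gp.quicksum(cost[i][j] * x[i, j] for i in range(n) for j in range(n)), GRB.MINIMIZE)
        m.optimize()
        # return inside the with-block, while the model still exists
        return m.ObjVal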
- Riley
Hi Emma,
Any chance you can upload your Excel file to a cloud drive, such as Dropbox, so I can reproduce your run?
- Riley
Hi Riley,
My data file is just a 100 by 100 matrix of randomly generated numbers between 0 and 10000 (=RANDBETWEEN(0;10000) in Excel).
If you still want it, can you tell me how to send it to you?
It works without any issue for problems up to 50x50, but when the matrix becomes larger, I always encounter memory issues.
Emma
Hi Emma,
No worries, I can work without this file.
I suspect that after 48 hours the data structures where you are recording data, in particular my_dict_aff, have grown quite large and are tying up memory.
My next suggestion is to write some of these values to file instead of holding them in memory, e.g.:
def Backwards_Gradients_Calculation():
    with open('my_dict_aff.csv', 'a') as my_dict_aff:
        global t
        while t > 0:
            ...

then instead of

my_dict_aff["Resources"].append(n)
my_dict_aff["Task"].append(m)
my_dict_aff["Time"].append(t)
my_dict_aff["Iteration"].append(k)
my_dict_aff["Assignment"].append(Y)

use

my_dict_aff.write(f"{n},{m},{t},{k},{Y}\n")  # the trailing \n keeps each record on its own line
- Riley
Hi Riley,
Thank you for your answer, I will try it immediately and keep you posted!
Do you think I could have a similar issue with
v = np.zeros((K, T, len(full_R), len(full_L)))
Do you think it takes a lot of memory, like my_dict_aff?
Thanks for your answer!
Emma
Hmm, not sure. What are its dimensions? It doesn't grow over time like my_dict_aff, and it looks like values may get overwritten, so I'm guessing it may be OK?
The dimensions are 100x10x100x100 and currently, the values are not overwritten.
They're being overwritten in the sense that you create the v array with zeros, then overwrite the zeros.
If we assume 8 bytes per value in the v array then this will be 100 x 10 x 100 x 100 x 8 / 1024 / 1024 MB ≈ 76.3 MB, so I don't expect there would be an issue there.
You can always get the size of a numpy array, in bytes, with the .nbytes attribute:
my_array.nbytes
or the size of a dictionary, in bytes, with the __sizeof__ method:
my_dict.__sizeof__()
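For instance, a quick check in your script could look like this (v and my_dict_aff refer to the objects from your code; note that __sizeof__ only measures the dictionary object itself, not the lists it contains):
import numpy as np

v = np.zeros((100, 10, 100, 100))             # same shape as in your code
print(f"v: {v.nbytes / 1024 / 1024:.1f} MB")   # about 76.3 MB for 8-byte floats

my_dict_aff = {"Resources": [], "Task": [], "Time": [], "Iteration": [], "Assignment": []}
print(f"my_dict_aff: {my_dict_aff.__sizeof__()} bytes")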
- Riley
Hi Riley!
Thank you very much for your help! I managed to prevent memory issues by paying attention to how I recorded data!
I would like to model the following constraint (modeled using PuLP) with gurobipy:
time_R = [2 for i in full_R]
time_L = [(0, 2) for i in full_L]
w_L = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15], [16, 17, 18, 19]]
w_R = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15], [16, 17, 18, 19]]
L = [[0, 1], [4, 6, 7], [8, 9, 10, 11], [14, 15], [16, 17, 18]]
R = [[0, 1, 3], [6, 7], [8, 9, 10], [12, 13, 14, 15], [16, 17, 18, 19]]

for l in L[t]:
    for r in R[t]:
        for i in w_R:
            for j in w_L:
                if r in i and l in j:
                    assignment_problem += x[r, l] <= max(0, ((w_R.index(i) + time_R[r]) - (w_L.index(j) + time_L[l][0])) + 1)
Do you know if it is possible with the multiple for loops and the if statement?
Thanks a lot for your help!
Emma
Hi Emma,
No problem, it is indeed possible. There is nothing that PuLP can do that Gurobi can't (the opposite is not true). Note you will need to introduce temporary variables for your expression, as the arguments to gurobipy.max_ need to be variables (and an optional constant).
m = gp.Model()  # Your model

for l in L[t]:
    for r in R[t]:
        for i in w_R:
            for j in w_L:
                if r in i and l in j:
                    tmp = m.addVar(lb=-float("inf"))
                    m.addConstr(tmp == (w_R.index(i) + time_R[r]) - (w_L.index(j) + time_L[l][0]) + 1)
                    # max_ must be assigned to a variable via an equality constraint,
                    # so introduce an auxiliary bound variable and use it in the inequality
                    ub = m.addVar()
                    m.addConstr(ub == gp.max_(tmp, constant=0))
                    m.addConstr(x[r, l] <= ub)

- Riley