'Python has stopped working' after calling Gurobi to solve hundreds of large-size MILPs
Greetings,
I need to generate 600 MILP instances and solve them with Gurobi through its Python interface. Each instance has roughly this size:
Optimize a model with 328593 rows, 485927 columns and 2346683 nonzeros
Variable types: 485640 continuous, 287 integer (286 binary)
I set model.Params.method = 2 to use the barrier (interior point) method rather than simplex, which reduces the computing time. Most instances are solved within 20~40 seconds, so I set a time limit of 180 seconds; if an instance cannot be solved to optimality within 180 seconds, the code skips it and randomly generates another instance, and so on. However, after Gurobi has solved about 300 instances, Python reports 'Python has stopped working' and becomes 'Not Responding'. I googled the problem and tried the following three things:
(1) Moved the Python file to a disk with plenty of free space (215 GB). The machine runs Windows 7 Enterprise, Intel Core i7-4790 CPU @ 3.60 GHz, 32 GB RAM, 64-bit operating system.
(2) Set model.Params.NodefileStart = 0.5 so that node data is written to disk.
(3) Set model.Params.Threads = 2 to reduce the number of cores used from the default (all 4) to 2.
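For reference, this is roughly what my solve loop looks like with those settings (a simplified sketch; build_instance and build_model stand in for my own generation code and are not the real functions):

import gurobipy as grb

for i in range(600):
    data = build_instance()            # placeholder for my random instance generator
    model = build_model(data)          # placeholder for building the MILP from the data
    model.Params.Method = 2            # barrier instead of simplex
    model.Params.TimeLimit = 180       # give up after 180 seconds
    model.Params.NodefileStart = 0.5   # write node files to disk
    model.Params.Threads = 2           # use 2 of the 4 cores
    model.optimize()
    if model.Status != grb.GRB.OPTIMAL:
        continue                       # skip this one; a replacement instance is generated later
    # ... record the solution ...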
But the same problem still occurs: Python stops working after solving 100~300 of the large instances.
I am not sure whether I should adjust Gurobi or Python. Should I free the memory after solving each large instance, or do something else?
When I solved 600 similar but smaller instances, like this:
Optimize a model with 16683 rows, 23501 columns and 113093 nonzeros
Variable types: 23436 continuous, 65 integer (64 binary)
There was no problem.
I hope someone can give me some hints.
Thanks,
Larry
Hi Larry,
Are you generating these instances all at the same time (in parallel) or one after another?
- If it is the former, you need a machine with a lot of RAM; disk space is not really relevant here.
- If it is the latter and you no longer need the models, you should make sure to delete them. It is possible that Python's automatic garbage collector has not yet freed the memory and you are simply running out of memory. A sketch of the pattern is shown below.
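For example, something along these lines (just a sketch; build_model stands in for your own model construction):

for i in range(600):
    model = build_model()    # placeholder for your instance generation
    model.optimize()
    # ... store the results you need ...
    del model                # drop the reference so Python can free the memory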
Best regards,
Sonja
Sonja,
Thank you for your reply.
I generate the instances in series, i.e. I generate one instance, solve it, then generate and solve the next one. So at any time there is only one instance being processed.
Every time I use
model = grb.Model()
to generate a new model, I think the old one should be deleted. Is that correct? Or is there any simple code to delete the old model completely?
Thanks,
Larry
Hi,
If you create the models that way, the old ones should indeed get deleted. However, you can also explicitly force a model to be garbage collected with the statement del model.
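You could also call model.dispose() first to release the Gurobi-side memory explicitly instead of waiting for the garbage collector:

model.dispose()    # free the memory held by the Gurobi model right away
del model          # then remove the Python reference as well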
Best regards,
Sonja
Thank you, Sonja. That's easy and helpful.
Larry