How to increase the speed when calling Gurobi in a loop?
Dear all,
I’m currently modeling my problem in Python. I need to call Gurobi many times to solve QP or nonconvex problems, and every time I use different parameters. What can I do to decrease the computing time? The question relates to two aspects: (1) How can I improve the speed of a single Gurobi call? (2) How can I reduce the total computation time caused by the multiple calls?
Thank you and best regards

Hi,
In order to improve the solution speed of Gurobi, we recommend having a look at the most important parameters and experimenting with them. You mentioned that you are already using different parameters; which parameters are you altering between runs?
Regarding multiple calls: if your calls are independent, you could try to parallelize your approach. Since you are solving quadratic programs, numerics and model formulation play an important role. What are the ranges of your matrix coefficients? Could you provide parts of a LOG file for a problem on which you think Gurobi could do better, e.g., the coefficient ranges and the barrier log?
Best regards,
Jaromił
Hi,
Thank you very much!
The parameters I alter between runs are some constant terms in the constraints; the differences between them affect the speed of each single run only slightly, which can be ignored.
I am trying to parallelize my approach, but I haven't succeeded yet. Could you please show me some sample code? (I have tried the method I found in this community, and I show the error message here. I have checked this error many times, and I am not missing a required argument, so I am confused about why this error happens.)
Here I show part of the LOG file for my problem.
I am looking forward to your reply!
Best regards,
Linzhi Jiang
Hi,
I want to add that I need to call Gurobi more than 1,000 times, so I need to reduce the total computation time caused by the multiple calls.
Best regards,
Linzhi Jiang
Hi Linzhi,
Just so I understand correctly: each Gurobi run takes ~0.5 seconds (your log shows 0.38), which I don't think we can reduce much, but the long computation time comes from the fact that you need to solve the model so often? If so, then parallelization is the answer.
In Python you could try using the multiprocessing package. However, please note that each Gurobi run uses multiple threads when the barrier method is used. It may be necessary to limit the number of Threads for each process.
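A minimal sketch of this pattern, assuming a hypothetical worker function `solve_one` and a made-up list of right-hand-side values (with gurobipy you would build the model and set the Threads parameter inside the worker):

```python
from multiprocessing import Pool

def solve_one(rhs):
    # Hypothetical worker: with gurobipy you would build the model here,
    # set m.Params.Threads to a fixed small number so the parallel
    # processes do not compete for cores, put `rhs` into the constraint's
    # constant term, and call m.optimize().
    # As a stand-in, we just return a value derived from the input.
    return 2 * rhs

if __name__ == "__main__":
    rhs_values = [1.0, 2.0, 3.0, 4.0]  # one model variant per entry
    with Pool(processes=4) as pool:
        objectives = pool.map(solve_one, rhs_values)
    print(objectives)
```

Because the calls are independent, `Pool.map` distributes the model variants across processes and collects the results in order.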
A different approach would be to try to reduce the number of models you have to solve by performing some preprocessing on your own. E.g., are there ways to tell whether a model will be feasible/infeasible without starting the optimization? Do you have to solve all alterations of the model, or can you skip some of them due to information you gain from other runs?
Best regards,
Jaromił
Hi,
Thank you very much!
You mentioned that each Gurobi run uses multiple Threads when using the barrier method, and that it may be necessary to limit the number of Threads for each process. Could you please tell me the reason for this?
I have to do a complete analysis, so I must solve all alterations of the model, and all the models should be feasible. Is this method (reducing the number of models to solve by performing some preprocessing of my own) still applicable?
Best regards,
Linzhi Jiang
Hi Linzhi,
Generally, Gurobi tries to utilize all available threads during an optimization run. If you parallelize your process on one machine, the processes may interfere with each other. It is recommended to limit the number of Threads for each process, ideally giving the same number of Threads to each process. If you use multiple machines to parallelize your optimization, then you usually don't have to set the Threads parameter.
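For example, a small helper (hypothetical, not part of gurobipy) could split the available cores evenly among the worker processes:

```python
import os

def threads_per_process(n_processes, n_cores=None):
    # Divide the machine's cores evenly among the parallel solver
    # processes so they do not oversubscribe the CPU; every process
    # gets at least one thread.
    if n_cores is None:
        n_cores = os.cpu_count()
    return max(1, n_cores // n_processes)

# e.g. in each worker:  m.Params.Threads = threads_per_process(4)
```

On an 8-core machine with 4 worker processes, each run would then be limited to 2 threads.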
If you are certain that you have to solve all alterations of the model, then the idea of reducing the number of models by preliminary analysis is not applicable.
Best regards,
Jaromił