MILP Parallelism
Hi,
I am trying to make my MILP model faster through parallelism by using multiple cores (around 30) on a supercomputer.
After reading articles and watching videos, my understanding is that Gurobi uses all available cores by default, so my questions are:
1. Is it possible to improve the computational time by assigning more cores to the solver on my own PC?
2. Do I need to make any changes to the main code when transferring it to the supercomputer?
3. Is there a chance that the MILP runs faster on the supercomputer, or will there be no improvement unless major revisions are made to the model?
Sincerely,
-
Hi Nastaran,
- Yes, you are right. The default value for the Threads parameter is 0, in which case Gurobi uses up to all the available logical processors on the machine. There is a soft limit of 32 threads under the default setting. If the number of logical processors on a machine is more than 32, Gurobi will only use as many threads as there are physical cores unless the Threads parameter is explicitly set to a higher value. For example, if a machine has 24 physical cores and each core has 2 threads, you would need to explicitly set the Threads parameter to 48 to use all available threads; otherwise, Gurobi will use 24 threads under the default setting.
- You would need to adjust the value of the Threads parameter as described in the previous bullet point; a minimal sketch of how to set it is shown after this list.
- If your supercomputer has a higher clock speed, it is very likely that Gurobi performance would improve because the node throughput would increase. However, it is hard to say whether the increased number of cores would necessarily be effective. The speedup from adding cores is limited by various factors, such as the fraction of time spent at the root node, the number of nodes explored, the tree topology, and the load balancing. The best approach is to experiment with different values of the Threads parameter to find the one that works best for your model.
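For reference, here is a minimal sketch in the Python API of setting the Threads parameter before calling optimize(). The tiny placeholder model and the value 48 (matching the 24-core example above) are just illustrative assumptions, not your actual model or machine:

import gurobipy as gp
from gurobipy import GRB

# Placeholder MILP; in practice this would be your own model
m = gp.Model("threads_example")
x = m.addVar(vtype=GRB.INTEGER, name="x")
y = m.addVar(vtype=GRB.INTEGER, name="y")
m.setObjective(x + 2 * y, GRB.MAXIMIZE)
m.addConstr(3 * x + 4 * y <= 12, name="c0")

# Explicitly request 48 threads; with the default Threads=0 on a machine
# with more than 32 logical processors, Gurobi would otherwise only use
# as many threads as there are physical cores
m.setParam(GRB.Param.Threads, 48)

m.optimize()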
Changing a model formulation to make it tighter can significantly decrease the runtime to optimality, but this is unrelated to parallelism or to the type of hardware used to solve the model.
You might find the article "What hardware should I select when running Gurobi?" a useful read.
Best regards,
Maliheh