Why Distributed Optimization?
Distributed Optimization can be used for:
- Distributed Concurrent: Allows you to use multiple machines to solve an LP or MIP model. Each machine uses a different strategy to solve the same problem, with the hope that one strategy will be particularly effective and will finish much earlier than the others.
- Distributed MIP: Allows you to divide the work of solving a single MIP model among multiple machines. A manager machine passes problem data to a set of worker machines to coordinate the overall solution process.
- Distributed Parameter Tuning: If you are using a Gurobi Compute Server, you can harness the power of multiple machines to perform distributed parallel tuning in order to speed up the search for effective parameter settings.
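All three modes share the same client-side setup: you tell Gurobi where the distributed workers are via parameters. Below is a minimal sketch in Python (gurobipy); the hostnames, password, and model file are placeholders, not real endpoints.

```python
import gurobipy as gp

# Start an environment that knows about the distributed worker pool.
env = gp.Env(empty=True)
env.setParam("WorkerPool", "worker1.example.com,worker2.example.com")
env.setParam("WorkerPassword", "passwd")  # only if the workers require one
env.start()

# Models read into this environment can use the distributed algorithms below.
model = gp.read("model.mps", env=env)
```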
Distributed Concurrent
With Distributed Concurrent, the approach is to solve the same problem on multiple machines simultaneously but with different settings.
This approach is particularly helpful for models that exhibit large performance variability. By default, Gurobi uses a different random seed for each run, which often produces enough variability on its own. You can also specify other parameter settings for the individual runs if you have additional knowledge about what works well for your model.
Since the concurrent solves are independent, the communication overhead is negligible, so a distributed concurrent approach will essentially never hurt performance. If you have extra machines available, there is therefore a good chance that you will get some benefit from Distributed Concurrent. The benefit tends to tail off as you add more and more machines, because the main source of the speedup (performance variability) is usually exploited well enough with just a few of them.
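As a rough sketch in gurobipy (reusing the environment from the setup above; the parameter choices are illustrative, not recommendations), ConcurrentJobs controls how many simultaneous solves run on the worker pool, and getConcurrentEnv lets you replace the default seed-based variation with explicit settings per solve:

```python
# Distributed concurrent: run two simultaneous solves on the worker pool.
model.setParam("ConcurrentJobs", 2)

# Optional: give each concurrent solve its own settings instead of the
# default per-run random seeds. The MIPFocus values are just examples.
env0 = model.getConcurrentEnv(0)
env1 = model.getConcurrentEnv(1)
env0.setParam("MIPFocus", 1)  # machine 0 concentrates on finding solutions
env1.setParam("MIPFocus", 3)  # machine 1 concentrates on the dual bound

model.optimize()  # the first solve to finish wins
model.discardConcurrentEnvs()  # revert to default concurrent behavior
```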
Distributed MIP
With Distributed MIP, the approach is to split the work of solving one big MIP among multiple machines.
In general, parallelism for MIP pays off only up to a point -- even on a single machine. Based on our extensive experience, our general sense is that beyond 8 (or maybe 16) threads, the additional speedup is rarely significant. In fact, too many threads can even be detrimental due to the overhead incurred by synchronization. This observation also agrees with Amdahl's law regarding the theoretical speedup when using multiple processors.
When splitting the work among multiple machines, benefits are even harder to achieve because communication latency makes the overhead more significant. It is often not beneficial to use a large number of workers; whenever possible, it is better to perform the optimization on one big machine instead of many small ones.
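For reference, Amdahl's law bounds the achievable speedup when only a fraction $p$ of the work can be parallelized across $n$ processors:

$$S(n) = \frac{1}{(1 - p) + p/n}$$

For example, with $p = 0.9$, even infinitely many processors cannot deliver more than a $10\times$ speedup, and $n = 16$ already achieves about $6.4\times$.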
However, the distributed MIP approach can work well for models that produce very large but relatively shallow trees. With these models, the number of open (unexplored) nodes with good dual bounds becomes very large during the optimization.
On the other hand, if the MIP tree search is essentially a single "dive", parallelism won't help much, because only one thread can process a given dive while the other threads would each have to process dives of their own. If the MIP is solved by following one dive to the end, until an optimal or very good solution is found at a leaf node, then processing additional dives on additional threads is not going to help.
How does distributed MIP work?
When solving a model with the distributed MIP approach, Gurobi begins with a "ramp-up" phase, in which each distributed worker solves the problem concurrently. At some point, Gurobi selects the "best" branch-and-bound tree produced by these workers and uses it as the starting point for the distributed phase. From that point on, the machines each explore different parts of the tree and synchronize their progress.
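In gurobipy, enabling distributed MIP is a one-parameter change on top of the worker-pool setup sketched earlier (the value 4 is just an example):

```python
# Distributed MIP: split one branch-and-bound search across four workers.
model.setParam("DistributedMIPJobs", 4)
model.optimize()
```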
Distributed Tuning
Gurobi includes a parameter tuning tool that automatically searches for parameter settings that improve optimization performance on a model or a set of models. This tuning tool has proven to be both quite effective and extremely popular.

One downside of the tuning tool is the time it can take. Automated tuning performs many solves on your model, each using different parameter settings. By default, this search for improved parameter settings takes roughly 10 times as long as solving the model (and it is often beneficial to let the tuning tool run for much longer). The distributed tuning tool uses multiple independent machines to perform tuning, thus allowing you to explore more candidate parameter settings in the same amount of time.
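A minimal sketch in gurobipy (again assuming the worker-pool environment from the setup above; the job count and time limit are placeholders):

```python
# Distributed tuning: evaluate parameter sets in parallel on the worker pool.
model.setParam("TuneJobs", 8)          # run eight tuning jobs in parallel
model.setParam("TuneTimeLimit", 3600)  # cap the tuning run at one hour
model.tune()

# Load the best parameter set found back into the model.
if model.tuneResultCount > 0:
    model.getTuneResult(0)
```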
Limitations
- Distributed Optimization is not fully supported for MIQCP (mixed-integer quadratically constrained) models.
- Callbacks are limited in Concurrent Optimizers. For specifics, please see Concurrent Optimizer.
- Distributed Optimization works best if all of the workers have very similar performance. For example, if one machine in your worker pool were much slower than the others in a distributed tuning run, any parameter sets tested on the slower machine would appear to be less effective than if they were run on a faster machine. Similar considerations apply for Distributed MIP and Distributed Concurrent. We strongly recommend that you use machines with very similar performance.
- A Compute Server node can only run a single distributed job at once, so you would need several Compute Server nodes to run several distributed jobs. For more information, see Distributed Workers and the Distributed Manager, which notes: "we only allow a machine to act as manager for a single distributed job. If you want to run multiple distributed jobs simultaneously, you'll need multiple manager machines."
Licensing
- You need a license with distributed algorithms enabled on the manager machine in order to use Distributed MIP or Distributed Concurrent. Distributed Tuning does not require a special license.
- On Gurobi Instant Cloud, Distributed Optimization is always enabled.
- The distributed workers do not require any license.
Further information
- Distributed Algorithms overview
- Gurobi Remote Services > Distributed Algorithms
- Distributed Algorithm Considerations
- Concurrent Optimizer