In the context of Gurobi, scaling questions typically come down to (1) performance when solving a single model and (2) performance when solving multiple models concurrently. With that distinction in mind:
- Vertical scaling for a single model typically refers to adding more physical cores. The ideal number of cores for solving a single model depends on your mathematical model and cannot be predicted: we have seen really large models that solve quickly on a single core and tiny models that benefit from many cores. Adding more memory only helps when it is actually needed; Gurobi does not change its behavior based on available memory, it simply requests whatever is needed. More cores typically increase memory requirements. Finally, note that GPUs do not add value for Gurobi.
- Vertical scaling for multiple models essentially means looking at the number of models you want to solve concurrently, multiplied by the number of cores you want to use for each. An 8-core license can be fine for solving 4 models concurrently if you restrict each model to 2 threads (using a Gurobi parameter; see the sketch after this list) and get good performance with that amount.
- Horizontal scaling for a single model means multiple machines working together on that model. We provide a “distributed” option that allows this, as an add-on to our other products. Only a small number of customers benefit from it: it is mostly useful when vertical scaling is no longer possible while adding cores is still expected to help.
- Horizontal scaling for multiple models in practice still means each individual model is solved on a single machine. The question is then how to distribute the models over the machines. You can of course do this yourself (manually starting models on the right machine, or using software or cloud services that help with that). Gurobi provides “Compute Server” as the component that can help here: running a single instance on one machine allows applications on many other machines to have their models solved by that Compute Server, and running multiple instances on separate machines makes them form a cluster that automatically distributes incoming models using a shared queue (a minimal connection example also follows this list).
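As an illustration of the thread limit mentioned above, here is a minimal sketch in Python (gurobipy) that restricts one solve to 2 threads via the Threads parameter; the model file name is a placeholder for your own model.

```python
import gurobipy as gp

# Limit this solve to 2 threads so that, for example, four such solves
# can run concurrently on an 8-core license without oversubscribing it.
model = gp.read("my_model.mps")  # placeholder file name
model.Params.Threads = 2
model.optimize()
```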
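And a minimal sketch of pointing a client application at a Compute Server cluster: the ComputeServer parameter is set on an empty environment before it is started, and the host names and port below are placeholders for your own nodes (61000 is the default Compute Server port).

```python
import gurobipy as gp

# Connect to a Compute Server cluster; submitted jobs are queued and
# distributed across the listed nodes automatically.
env = gp.Env(empty=True)
env.setParam("ComputeServer", "node1.example.com:61000,node2.example.com:61000")  # placeholder hosts
env.start()

model = gp.Model("example", env=env)
# ... build the model as usual; optimize() then runs on the cluster
model.optimize()
```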
Beyond scaling, the alternatives for improving runtime are changing the mathematical model formulation and/or changing Gurobi parameter settings. Gurobi Experts can help you with this.
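As one example of adjusting parameter settings, the sketch below uses Gurobi's automated tuning tool to search for parameter values that improve runtime on a given model; the file name is again a placeholder.

```python
import gurobipy as gp

model = gp.read("my_model.mps")  # placeholder file name

# Let the automated tuning tool search for better parameter settings,
# then apply the best set it found and solve.
model.tune()
if model.TuneResultCount > 0:
    model.getTuneResult(0)  # load the best parameter set found
model.optimize()
```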