Some Gurobi-powered applications solve large batches of models with very short individual runtimes. Because each model solves quickly, careful API usage and parameter tuning are sometimes overlooked. Across a large batch of models, and over many days of operation per year, wall-clock time accumulates, so it can be worth checking whether small changes can reduce model-building and solve times and ensure that the available hardware and licenses are used as effectively as possible.

## API usage

* **Recycle Gurobi environments:** Gurobi environments and models have a 1:n relationship. Creating a Gurobi environment takes some time, which can be saved by reusing the same environment for multiple model builds. Taking our Python API, gurobipy, as an example:

```python
import gurobipy as gp

# fast: one environment, reused for all models
env = gp.Env()
for i in range(10000):
    model = gp.Model(env=env)
    model.optimize()

# slow: a new environment for every model
for i in range(10000):
    env = gp.Env()
    model = gp.Model(env=env)
    model.optimize()
```

* **Review model-building code:** Gurobi's native APIs allow efficient model building; some performance-related aspects to keep in mind are explained in the article *How do I improve the time to build my model?*

* **Consider reusing model objects:** When solving many models in sequence, where each model has a similar structure or differs only in a few matrix coefficients, consider reusing the same model object and updating the changed coefficients with model.chgCoeff(). Gurobi will then also automatically try to use information from the previous optimization to warm-start the next one. If models differ substantially, it might be more efficient to build each model from scratch with a reused environment. When solving the same model for different and independent input data, you might like our Multi-Scenario feature.

* **Use code profilers:** Code profilers are very useful tools for identifying bottlenecks in model-building time and in all pre- and post-processing steps involved.
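For example, Python's built-in cProfile module can rank the most expensive calls; build_model below is a hypothetical stand-in for your own model-building code:

```python
import cProfile
import io
import pstats

def build_model():
    # hypothetical stand-in for your model-building code
    coeffs = {(i, j): i * j for i in range(200) for j in range(200)}
    return sum(coeffs.values())

profiler = cProfile.Profile()
profiler.enable()
build_model()
profiler.disable()

# print the ten most expensive calls by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
print(stream.getvalue())
```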

* **Dispose of your environments:** Don't forget to dispose of your environment after its last use.

## Parameter tuning

For models of similar structure, it is sometimes possible to identify solver algorithms that are more effective than others on your specific set of models. In these cases, it can make sense to turn off some solver algorithms or increase the intensity of others. If parameter settings are chosen appropriately, more CPU capacity becomes available for the most effective algorithms of the Gurobi Optimizer, which can help work through a batch of models more quickly. For example, the Method parameter can often be fixed to primal simplex, dual simplex, or the barrier algorithm once you have identified which algorithm typically wins the race in concurrent mode, the default behavior. Settings for Presolve, Heuristics, and Cuts are also worth investigating when tuning parameters for models with short runtimes; sometimes, turning them off is beneficial. We can help you identify the most efficient parameter settings if you provide us with representative model files.

## Parallelization

Solving multiple Gurobi models at the same time and on the same machine is a powerful lever for solving batches of models quickly. Parallel solving means running several Gurobi processes at the same time, where each process has its own Gurobi environment and model; environments cannot be shared across processes or threads. Models can be solved in parallel via parallel calls to the Gurobi Command Line Interface (gurobi_cl), tools like Python's multiprocessing module, or many alternatives. Note that your ability to solve models in parallel efficiently is limited by your hardware, mostly the number of available (and licensed) cores.

To get the most out of your available hardware, it makes sense to benchmark solve time for different Threads settings per individual model solve. When solving batches of models with short runtimes, using relatively few (or even just one) thread per model and solving more models in parallel can be an effective way to work through a large batch as quickly as possible. A good starting point for hardware utilization is to keep the number of parallel model solves multiplied by the Threads used per model equal to the number of physical CPU cores of the underlying machine, assuming that no other processes require significant CPU capacity at the same time. As with other parameter settings, we can help you identify the most efficient Threads settings if you upload representative model files.

## Containerization

Containerized environments for optimization applications are becoming more popular due to their flexibility, reproducibility, and maintainability. There are many ways to handle batch workloads using modern cloud building blocks, and priorities differ between applications. One common option is to split the work into many very small workloads, which can be attractive for several reasons, including ease of implementation and hardware costs. However, starting and stopping many containers introduces overhead. In terms of the pure wall-clock time needed to solve a batch of optimization models, it can be worth keeping one (or a few) container(s) alive that solve models sequentially or in parallel.

This article only scratches the surface of the considerations for batch solving and containerized environments. Our team of Technical Account Managers is happy to discuss the architecture of any Gurobi-powered application.
