Imagine that you run your model on a solver with default parameters and it finds an optimal solution in 30 seconds. Now imagine that you make a small change to the model, such as reordering the variables or constraints, and the solver then takes twice as long to solve. This scenario can happen, and the phenomenon is known as performance variability. Performance variability arises because the solver takes a different solution path, which is itself a consequence of random processes inside the solver.
Other small changes, such as modifying a coefficient or a constraint right-hand side (RHS), or running on a different machine, can also produce different solution paths and, as a result, a change in performance. The tables below show an example using glass4.mps (a model distributed with the Gurobi installation). To create each version of the model we simply take the first constraint and move it to the end (see the code sketch after the tables). Note how changing the Seed parameter has a similar effect, producing comparable variance in runtime.
Models made by reordering constraints:

| Model         | Seed | Runtime (s) |
|:--------------|-----:|------------:|
| glass4.mps    |    0 |       19.58 |
| glass4_v2.mps |    0 |       10.72 |
| glass4_v3.mps |    0 |       17.58 |
| glass4_v4.mps |    0 |        4.00 |
| glass4_v5.mps |    0 |       45.77 |

Same model, different values of Seed:

| Model      | Seed | Runtime (s) |
|:-----------|-----:|------------:|
| glass4.mps |    0 |       19.51 |
| glass4.mps |    1 |       23.10 |
| glass4.mps |    2 |        8.79 |
| glass4.mps |    3 |       47.46 |
| glass4.mps |    4 |       19.87 |
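The reordered versions in the first table can be produced programmatically. The following is a minimal sketch, assuming the Python gurobipy API and a local copy of glass4.mps; the output file name is only illustrative.

```python
import gurobipy as gp

# Move the first constraint to the end of the model and save the result.
m = gp.read("glass4.mps")
first = m.getConstrs()[0]
expr = m.getRow(first)                       # left-hand side of the first constraint
sense, rhs, name = first.Sense, first.RHS, first.ConstrName
m.remove(first)                              # delete it from its original position...
m.addLConstr(expr, sense, rhs, name=name)    # ...and append an identical copy at the end
m.update()
m.write("glass4_v2.mps")                     # hypothetical output file name
```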
Developing an understanding of how the solver can respond to similar models is essential both for managing performance expectations and for making fair and accurate comparisons between parameter settings, between hardware, and between solvers (including different versions of the same solver).
When making such comparisons it is important to produce a distribution of results by doing one of two things:
1) Run the solver on many similar instances of the same model. This works well when you can generate problem instances programmatically, but that is not typical for real-world problems.
2) Run the solver many times on the same model (or a small set of models), changing the value of Seed each time; changing the Seed alone is enough to cause performance variance (see the sketch after this list).
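The sketch below illustrates approach 2). It assumes the Python gurobipy API and a local copy of glass4.mps; the file name, seed range, and output format are only illustrative.

```python
import gurobipy as gp

runtimes = {}
for seed in range(5):
    m = gp.read("glass4.mps")       # re-read so every run starts from a clean model
    m.setParam("Seed", seed)        # the random seed is the only thing that changes
    m.optimize()
    runtimes[seed] = m.Runtime      # solve time in seconds as reported by the solver
    m.dispose()

for seed, t in sorted(runtimes.items()):
    print(f"Seed {seed}: {t:.2f} s")
```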
If the results of 2) show a lot of inconsistency, then the model exhibits high performance variability. In that case, seeking parameter settings that reduce the variability can be a good idea, as can examining the model for known causes of variability such as numerical issues.
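As a starting point for such an examination, the following sketch (again assuming gurobipy) prints the model's statistics, whose coefficient ranges can hint at numerical issues:

```python
import gurobipy as gp

m = gp.read("glass4.mps")
m.printStats()   # prints model statistics, including coefficient ranges,
                 # whose extreme values can point to numerical issues
```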
For a deeper dive into performance variability please refer to:
Lodi, Andrea, and Andrea Tramontani. "Performance Variability in Mixed-Integer Programming." In *Theory Driven by Influential Applications*, TutORials in Operations Research. INFORMS, 2013.