It is difficult to predict how Gurobi will perform on a given machine. In general, the solver benefits from high CPU clock speeds and low-latency, high-bandwidth memory. You can consult this benchmark to compare different CPUs with respect to single-thread performance.

Having multiple cores at your disposal can improve performance, but this is highly problem-dependent.

This is also true for the amount of memory (RAM). If the model cannot be solved without exceeding physical memory, performance suffers severely. Even small models can require a lot of memory when the search builds a large MIP tree. In such cases, the NodeFileStart parameter can be used to write compressed node information to disk and free up memory. Multi-channel memory configurations (e.g., dual- or quad-channel DDR4) increase data throughput and are preferred over single-channel RAM. Memory benchmarks can be found here.
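As a sketch, NodeFileStart can be set through a Gurobi parameter file; the 0.5 GB threshold and the directory path below are illustrative assumptions, not recommendations:

```text
# gurobi.prm -- illustrative parameter file; values are assumptions
# Start writing MIP node data to disk once node storage exceeds 0.5 GB
NodeFileStart 0.5
# Directory for the compressed node files (defaults to the current directory)
NodeFileDir /tmp/nodefiles
```

The same parameters can of course be set programmatically through any of the Gurobi APIs.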

Whenever possible, we recommend benchmarking to determine real performance.

**A general guideline:** if you are solving a large MIP in parallel, it is best to use a system with the fastest possible clock rate, using the fastest available memory, with as many fully populated memory channels as are available. Current Intel Xeon systems support up to six channels per CPU, while current AMD EPYC systems support up to eight. Desktop and low-end server configurations typically have fewer channels.

### Further information

- How do I avoid an out-of-memory condition?
- Why does Gurobi perform differently on different machines?
- What hardware should I select when running Gurobi?
- Is there a way to predict how long it will take Gurobi to solve a MIP?

## Comments

3 comments

Hi Tamás,

Your model seems to be rather small, but it is nonetheless difficult to find a feasible solution. I'm not sure whether a Xeon Phi helps Gurobi during solving, but a more powerful computer (compared to your desktop PC) with many cores should speed up the process.

If I understand you correctly, Gurobi is not able to provide you with a feasible solution. You could try adjusting some parameters that shift the focus towards finding solutions:
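As a sketch, settings along these lines commonly shift the MIP search toward feasibility; the specific values are illustrative assumptions, not tuned recommendations:

```text
# feasibility.prm -- illustrative values, not recommendations
# Focus the MIP search on finding feasible solutions quickly
MIPFocus 1
# Spend a larger fraction of solve time in feasibility heuristics (default 0.05)
Heuristics 0.2
```

Such a parameter file can be passed to the command-line tool or read from any of the APIs.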

If you have hints for certain variables, you can also provide them to Gurobi as (partial) MIP starts, which helps it find solutions using those values.
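As a sketch, a partial MIP start can be supplied in a .mst file; the variable names below are hypothetical placeholders:

```text
# hypothetical.mst -- partial MIP start; variable names are made up
# Each line gives a start value for one variable; variables omitted here
# are left for Gurobi to complete into a full solution
x0 1
x1 0
```

Start values can equally be set on individual variables through the API (e.g., the Start attribute in gurobipy).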

Hi Jakob,

Thanks a lot for the hints! Indeed, my first main problem is finding a solution (often just the existence of a feasible solution, even if not proven optimal, is quite interesting in itself for me). So far I have spent only a little time playing around with the parameters you suggested on smaller but similar models; let me give some feedback.

If I manage to experiment with a strong multi-core computer, I will also give you feedback.

Hi,

I am about to solve purely binary IP models, arising from mathematical problems, with roughly n*n (n squared) variables and constraints, each constraint having n non-zero coefficients (say, the matrix is of size 900 by 900, and each row contains 30 non-zeros). Do you think that on a model like this, Gurobi could make good use of the resources of a supercomputer (say, an HP Apollo 8000 cluster that has 1056 Sandy Bridge CPU cores, accelerated by 90 Xeon Phi coprocessors adding 5490 more cores for computations)? On a regular PC (with a 4-core CPU) I do not get a solution after a couple of weeks of running time.

Thanks,

Tamás
