
How does Gurobi perform on different computer hardware?


  • Tamas Heger

    Hi,

    I am about to solve purely binary IP models, arising from mathematical problems, with roughly n*n (n squared) variables and constraints, where each constraint has n non-zero coefficients (say, the matrix is 900 by 900 and each row contains 30 non-zeros). Do you think that on a model like this, Gurobi could make good use of the resources of a supercomputer (say, an HP Apollo 8000 cluster with 1056 Sandy Bridge CPU cores, accelerated by 90 Xeon Phi coprocessors that add 5490 more cores for computation)? On a regular PC (with a 4-core CPU) I do not get a solution after a couple of weeks of running time.

    Thanks,

     Tamás

     

  • Jakob Schelbert
    Collaborator

    Hi Tamás,

    Your model seems to be rather small, but it is nonetheless difficult to find a feasible solution for. I'm not sure whether a Xeon Phi helps Gurobi during the solve, but a more powerful machine with many cores (compared to your desktop PC) should speed up the process.

    If I understand you correctly, Gurobi is not able to provide you with a feasible solution. You could try to adjust some parameters that shift the focus towards finding solutions:

    • MIPFocus=1 tells Gurobi to concentrate on finding feasible solutions rather than proving optimality.
    • Heuristics controls the fraction of runtime spent on MIP heuristics (the default is 0.05); increasing it can help.
    • Presolve=2 switches to aggressive presolving.

    If you have hints for the values of certain variables, you can also provide these to Gurobi as a (partial) MIP start, which helps it find solutions that use these values; see the sketch below.
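
    A minimal gurobipy sketch of these suggestions (the model file name and the variable name below are only placeholders for illustration):

        import gurobipy as gp

        # Placeholder file name; read or build your own binary IP instead.
        model = gp.read("my_binary_ip.lp")

        # Shift effort towards finding feasible solutions.
        model.setParam("MIPFocus", 1)      # emphasize feasibility over optimality
        model.setParam("Heuristics", 0.2)  # spend more time in MIP heuristics
        model.setParam("Presolve", 2)      # aggressive presolve

        # Partial MIP start: if you already know good values for some
        # variables, set their Start attribute and Gurobi will try to
        # complete the rest of the solution.
        x = model.getVarByName("x[0]")     # placeholder variable name
        x.Start = 1

        model.optimize()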

  • Tamas Heger

    Hi Jakob,

    Thanks a lot for the hints! Indeed, my first main problem is finding a solution (often the mere existence of a solution, even if not proven optimal, is quite interesting in itself for me). So far I have spent only a little time playing around with the parameters you suggested on smaller but similar models; let me give some feedback.

    • Setting MIPFocus to 1 seems to be a good choice, but not always (leaving the other two parameters fixed, it produced a slower run for some instances).
    • Heuristics is quite unpredictable: for a given model, 0.07 was far worse than the default 0.05, while 0.08 was far better, and 0.10 was again somewhat worse. The effect is anything but linear, so I guess it is worth trying different values on parallel threads (see the sketch after this list).
    • Presolve set to 2, however, usually resulted in a slower run (after presolving).
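
    A rough sketch of that idea (as I understand it) using gurobipy's concurrent environments, which run several independent MIP solves with different parameter settings and split the available threads among them; the file name is again just a placeholder:

        import gurobipy as gp

        model = gp.read("my_binary_ip.lp")   # placeholder file name

        # One concurrent environment per independent solve; each solve gets
        # its own parameter settings, and the first solve to finish
        # determines the result.
        for i, h in enumerate([0.05, 0.08, 0.10]):
            env = model.getConcurrentEnv(i)
            env.setParam("MIPFocus", 1)      # keep the feasibility focus
            env.setParam("Heuristics", h)    # different heuristics fraction per solve

        model.optimize()
        model.discardConcurrentEnvs()        # revert to standard behavior afterwards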

    If I manage to experiment with a strong multi-core computer, I will give you feedback on that as well.


Article is closed for comments.