
Distributed MIP logging & ramp-up question

7 comments

  • Ryan Goodfellow

    Hey Gurobi team,

    Any insights into this would be helpful. We continue to have problems with the ramp-up phase of our distributed MIP runs. If there's a way to skip this ramp-up (even for >1 machine), it would be very much appreciated.

    Simon, have you noticed anything different since upgrading to v9.1 or higher? Our DistributedMIP performance has been truly awful.

  • Simon Felix

    No, unfortunately I haven't been able to do more experiments. I have only used 9.1 without DistributedMIP.

  • Matthias Miltenberger
    Gurobi Staff

    Hi Simon, hi Ryan,

    I apologize for the delay.

    It seems that your instance is not very well suited for distributed MIP optimization. I suggest you try tuning your parameters for the "normal" mode instead of running this on a larger cluster of machines. 

    There is no setting to disable the ramp-up phase.

    I hope that helps.

    Cheers,
    Matthias

  • Simon Felix

    Hi Matthias,

    Hmm... I think you misread my original post? In my example I ran Gurobi on a *single* node, not on a cluster.

    Best regards,
    Simon

  • Ryan Goodfellow

    Hi Simon,

    A while back, Gurobi support let us know about a hidden parameter, GURO_PAR_RAMPUPNODES.

    You can use it to set the number of nodes explored before the solver hops out of the ramp-up phase. Anecdotally, it appears to complete (# workers * GURO_PAR_RAMPUPNODES) nodes in the branch-and-bound tree before hopping out. Maybe you can set it to some low number with 1 worker so it leaves ramp-up sooner and you don't lose as many open nodes? Just a thought - maybe it will help you out.
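
    For anyone who wants to try this, a parameter file is one way to experiment without touching code. A rough sketch of a gurobi.env file (since GURO_PAR_RAMPUPNODES is undocumented, whether your Gurobi version accepts it this way is an assumption, and 10 is only an illustrative value):

    ```
    # gurobi.env - read automatically from the working directory at startup
    # Undocumented parameter mentioned by Gurobi support; 10 is only an example value
    GURO_PAR_RAMPUPNODES 10
    ```

    If the parameter file is not picked up, setting it programmatically (e.g. via setParam) may be worth a try instead.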

    I do find it interesting that you lose so many nodes after the hop-out with 1 worker. I'm curious if Gurobi can provide a more comprehensive explanation for why this is happening.

    Cheers,

    Ryan

  • Matthias Miltenberger
    Gurobi Staff

    Hi!

    I really misunderstood the post - so you don't want to use "distributed" MIP since you are running only on one node. Gurobi might as well disallow a DistributedMIPJobs setting of 1. Its main use is to explicitly enable ramp-up and maybe some other techniques that distinguish this from a normal optimization. I don't understand why you would want to use the DistributedMIP mode on a single machine without ramp-up - why don't you just run a normal optimization?

    The idea of ramp-up is to avoid idle times on other machines until enough (branch-and-bound) nodes are available to distribute. So all machines initially start a racing phase with different settings or seeds until enough nodes have been generated or some other limit is reached to start the actual distributed part.

    Furthermore, I don't understand why it's a bad thing to have fewer open nodes. Usually, you want to have as few open nodes as possible.

    Cheers,
    Matthias

  • Simon Felix

    Thanks for the explanation. This matches my understanding. With my original post I wanted to gain a better understanding of the Distributed MIP algorithms in Gurobi. The parts which I don't quite understand are a) who the single node is racing against, and b) whose open nodes are removed?

    It's great to have fewer open nodes, but since there's only a single node racing, who produced these nodes that get removed after ramp up? If they were produced by the single node, aren't they relevant to the search? It's almost as if there were four nodes?!

    I run it this way because it works well with our academic license and makes it very easy to develop on my small machine while relying on a beefy server for the heavy lifting. Disabling ramp-up in this scenario seems sensible.
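
    For reference, my setup looks roughly like the following in a gurobi.env file on the development machine (hostname and port are placeholders; the server runs Gurobi Remote Services as the distributed worker):

    ```
    # gurobi.env on the small development machine
    # Placeholder address of the server running Gurobi Remote Services
    WorkerPool server.example.com:61000
    # A single distributed worker - the setup discussed in this thread
    DistributedMIPJobs 1
    ```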

    Cheers & have a nice weekend
    Simon

