
Providing the optimal solution as a warm start does not improve the runtime of the MIP.

Answered


5 comments

  • Riley Clement
    Gurobi Staff

    Hi Reza,

    I see that you are providing an optimal solution to the solver. Termination (via MIP Gap) will also require the bound to improve significantly from the initial linear relaxation value of 0. In the solve where you provide the solution, Gurobi does not know the solution is optimal, so it will spend time with heuristics trying to improve it. It may be that when it fails to improve it with simpler heuristics it starts to engage more expensive heuristics, which take away time that could be used to improve the bound. In this particular instance you would want to try setting MIPFocus=3 to tell the solver to focus on improving the lower bound.
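
    For concreteness, here is a minimal gurobipy sketch of that idea - the file name and the solution dictionary are just placeholders, not something taken from your model:

        import gurobipy as gp

        # Placeholder file name and values, purely for illustration.
        model = gp.read("model.mps")
        optimal_values = {"x[0]": 1.0, "x[1]": 0.0}  # hypothetical variable -> value map

        # Provide the known solution as a MIP start.
        for v in model.getVars():
            if v.VarName in optimal_values:
                v.Start = optimal_values[v.VarName]

        # The incumbent is already optimal, so direct the effort towards
        # proving optimality by improving the best bound.
        model.Params.MIPFocus = 3

        model.optimize()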

    However, the fact that the solve with the MIP start was worse may just be a coincidence - never trust a comparison based on a single run. A comparison should always be made with a distribution of results (which is where our open-source Python package gurobi-logtools is useful).

    A coin analogy
    When a coin is fair, we expect that 50% of the time it will land heads up/tails down, and the other 50% of the time it will land heads down/tails up. To perform an experiment to test whether the coin is fair, or whether one side will land face up more often than the other, we would flip the coin many times and analyze the results together. Flipping the coin once, and concluding that whichever way the coin landed is how it will always land, would be incorrect.

    How to compare two models or solvers?
    Comparing two models with only one value of the Seed parameter is like flipping a coin once. If you are not deliberately changing this parameter then it will have the default value of 0. There are random actions that Gurobi performs when solving a model, and these actions are based on randomly generated numbers. It is the seed value that controls the sequence of random numbers and therefore the random behavior. When the Seed parameter is kept constant the solver repeats the exact same sequence of actions - "the solution path" - every time it solves the same model. When you change the value of Seed you get a different solution path, and a different level of performance. When making comparisons between two models, solvers, machines, or parameter settings, it is important to do so across many values of Seed. The more values you use, the more confidence you will have in your conclusions.
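
    As a sketch of that kind of experiment (assuming a cold run and a warm run of the same model, with placeholder file names), each run can also write its own log so that gurobi-logtools can summarize the results afterwards:

        import gurobipy as gp

        runtimes = {"cold": [], "warm": []}

        for seed in range(20):  # more seed values -> more confidence in the comparison
            for label in ("cold", "warm"):
                model = gp.read("model.mps")     # placeholder model file
                if label == "warm":
                    model.read("start.mst")      # hypothetical MIP-start file
                model.Params.Seed = seed
                model.Params.LogFile = f"{label}_seed{seed}.log"  # parse later with gurobi-logtools
                model.optimize()
                runtimes[label].append(model.Runtime)

        # Compare distributions of runtimes, not a single pair of runs.
        for label, times in runtimes.items():
            print(label, sum(times) / len(times))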

    - Riley

  • Reza Belbasi

    Hi Riley,

    Thanks for your swift and helpful response! I experimented with a number of different parameter settings to see whether tuning them would make the model with the optimal solution as a warm start faster than the cold-start model. In light of your suggestion and this post, I considered setting MIPFocus=3, Heuristics=0, and RINS=0. Each row reports the average runtime of the cold start vs the warm start with the optimal solution over 100 independent runs. In run number i, we fix Gurobi's Seed parameter to i for both the cold-start and warm-start models so that we cover different seeds as well. Please find the results below:

    [results table]

    Notice that although Seed is fixed, the cold-start model returns different average runtimes across the different settings. The results show that setting MIPFocus=3, Heuristics=0, or RINS=0 does not help and in fact makes the difference noticeably larger. However, I realized that one thing does work (at least with my model): rather than warm starting, providing the optimal solution as a hint to Gurobi via VarHintVal. I could not find sources on how one should tune the hint confidence parameter VarHintPri, so I tested different values of VarHintPri={0, 1, 5, 10, 100, 1000}. Please find the results below:

    [results tables]

    We see that a hint with confidence 0 gives averages very close to the cold start. I am not sure how generalizable this is to other MIP problems, but it is a good option to keep in mind.
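
    For anyone reading later, a minimal sketch of what I mean by providing the solution as hints rather than as a MIP start (the file name and values are placeholders):

        import gurobipy as gp

        model = gp.read("model.mps")                      # placeholder model file
        optimal_values = {"x[0]": 1.0, "x[1]": 0.0}       # hypothetical variable -> value map

        for v in model.getVars():
            if v.VarName in optimal_values:
                v.VarHintVal = optimal_values[v.VarName]  # suggested value
                v.VarHintPri = 0                          # hint confidence; 0 gave averages close to the cold start in my tests

        model.optimize()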

    Reza

  • Riley Clement
    Gurobi Staff

    Hi Reza,

    VarHintPri is a relative confidence. If you set VarHintPri to 1 for all variables in one experiment, and then VarHintPri to 1000 for all variables in another, then in each case you are saying "I am equally confident about these hints for all variables", but you are not saying that you have more confidence in the values where VarHintPri=1000. Essentially this makes your 6 experiment settings with VarHintPri equivalent (up to randomness), which would make sense, as I don't see a trend in either the cold or warm-opt columns.
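
    For example (with a purely hypothetical split of the variables, just for illustration), using the relative nature of VarHintPri would look something like this:

        import gurobipy as gp

        model = gp.read("model.mps")                  # placeholder model file
        all_vars = model.getVars()
        trusted = all_vars[: len(all_vars) // 2]      # hypothetical "confident" group
        unsure = all_vars[len(all_vars) // 2 :]       # hypothetical "less confident" group

        # The priorities only matter relative to each other: the hints on the
        # trusted group are treated as more reliable than those on the unsure group,
        # whereas a uniform priority (all 1 or all 1000) expresses no preference at all.
        for v in trusted:
            v.VarHintVal = 1.0                        # hypothetical hint value
            v.VarHintPri = 10
        for v in unsure:
            v.VarHintVal = 0.0                        # hypothetical hint value
            v.VarHintPri = 1

        model.optimize()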

    The results where you compare the MIPFocus and RINS parameter settings are interesting and surprising. I would have expected the cold-start runs to be much worse with Heuristics=0, and I would not have expected the warm-start runs to be much worse. At this stage all I can do is offer to try to replicate your results (if you are willing to share an MPS model and MIP start) and explore further what is going on.

    - Riley

  • Reza Belbasi

    Hi Riley,

    Thanks for your response. Understood about the VarHintPri parameter. I am happy to share the MPS file; how can I share it with you?

    Reza

  • Riley Clement
    Gurobi Staff

    Hi Reza,

    The easiest way is to use either GitHub or a file-sharing service such as Filemail, Dropbox, Box, Google Drive, or OneDrive.

    - Riley

