
Why is using Indicator Constraints in Gurobi significantly slower than the Big-M method

Answered

3 comments

  • Riley Clement
    Gurobi Staff

    Hi Yuan,

    What do your Big-M constraints look like?

    - Riley

  • Yuan Jiawei

    My Big-M constraints are defined as follows:

    model.AddConstr(f <= mu + rho * pi + 100000 * (1 - Sigma[i, j, z]), "c10");
    model.AddConstr(f <= 100000 * sigma, "c11");
    model.AddConstr(f >= mu + rho * pi - 100000 * (1 - Sigma[i, j, z]), "c12");

    I have tested different values for big M (1,000, 10,000, and 100,000), and all of them yield the same objective value.
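
    For reference, the indicator-constraint version of the same logic looks roughly like the sketch below (assuming Sigma[i, j, z] is binary and mu, rho and pi are data; the constraint names are placeholders):

    // Sigma[i, j, z] = 1 forces f = mu + rho * pi; Sigma[i, j, z] = 0 forces f <= 0.
    model.AddGenConstrIndicator(Sigma[i, j, z], 1, 1.0 * f - (mu + rho * pi), GRB.LESS_EQUAL, 0.0, "ind_ub");
    model.AddGenConstrIndicator(Sigma[i, j, z], 1, 1.0 * f - (mu + rho * pi), GRB.GREATER_EQUAL, 0.0, "ind_lb");
    model.AddGenConstrIndicator(Sigma[i, j, z], 0, 1.0 * f, GRB.LESS_EQUAL, 0.0, "ind_off");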

  • Riley Clement
    Gurobi Staff

    There could be a few explanations:

    i) Is this a fair comparison?

    If you want to compare two models, you cannot rely on a single run of each (maybe you are not, I do not know). To draw a sound conclusion about which model is better, you need to run each model several times with different values of the Seed parameter and then compare the resulting distributions. See How can I make accurate comparisons?
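
    With the .NET API this kind of comparison can be scripted as a simple Seed sweep, for example (a sketch; `model` stands for whichever of the two already-built models you are benchmarking, and the seed values are arbitrary):

    // Solve the same model several times, changing only the Seed parameter,
    // and record the runtimes so the two formulations can be compared as distributions.
    foreach (int seed in new int[] { 0, 1, 2, 3, 4 })
    {
        model.Reset();                              // discard previous solution information
        model.Set(GRB.IntParam.Seed, seed);
        model.Optimize();
        Console.WriteLine("Seed " + seed + ": " + model.Get(GRB.DoubleAttr.Runtime) + " s");
    }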

    ii) The indicator constraints cannot be linearized

    The constraint `f <= 100000 * sigma` tells the solver that f has an upper bound of 100000. Similarly, the constraint `f <= mu + rho * pi + 100000 * (1 - Sigma[i, j, z])` tells the solver that `f - mu - rho * pi` has an upper bound of 100000. The indicator constraints do not convey this information. When the solve begins, indicator constraints are translated into SOS1 constraints. The SOS1 constraints may then be translated into big-M constraints. If the solver can deduce reasonable big-M values, you end up with big-M constraints, which is more efficient. If the big-M values would be too large, the SOS1 constraints are not linearized, in order to avoid numerical issues. What is considered too large is controlled by the PreSOS1BigM parameter.
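
    If you want to experiment with that threshold, it is a single parameter change, e.g. (a sketch; 1e5 simply matches the big-M value used in your explicit formulation):

    // Allow SOS1 constraints derived from the indicator constraints to be linearized
    // with big-M values up to 1e5; larger thresholds can introduce numerical issues.
    model.Set(GRB.DoubleParam.PreSOS1BigM, 1e5);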

    iii) An unlucky choice of SOS1 encoding

    There are several ways that SOS1 constraints can be translated into linear constraints with big-M values. This can be controlled by the PreSOS1Encoding parameter, but its default lets the solver choose. It may be that the solver is translating the SOS1 constraints but choosing a translation (i.e. encoding) that is not the best.
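
    Trying the alternative encodings is again just a parameter sweep, e.g. (a sketch; the loop assumes the documented encodings 0 through 3):

    // Compare the SOS1 reformulation encodings; the default (-1) lets the solver choose.
    for (int enc = 0; enc <= 3; enc++)
    {
        model.Reset();
        model.Set(GRB.IntParam.PreSOS1Encoding, enc);
        model.Optimize();
        Console.WriteLine("PreSOS1Encoding " + enc + ": " + model.Get(GRB.DoubleAttr.Runtime) + " s");
    }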

     

    If you do an accurate comparison between the models and find that your big-M formulation is indeed better, then I would make sure you have good bounds on your variables and repeat the experiment. This gives the solver a better chance of translating the indicator constraints into linear constraints (via SOS1).
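
    As a minimal illustration of what "good bounds" means here (the bound of 500 is a placeholder for whatever you can justify from your data):

    // Tighten the bounds of an existing variable through its attributes, e.g. give f a
    // finite, justified upper bound instead of leaving it at GRB.INFINITY, so the solver
    // can derive a reasonable big-M when linearizing the indicator constraints.
    f.Set(GRB.DoubleAttr.UB, 500.0);
    f.Set(GRB.DoubleAttr.LB, 0.0);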

    There is also a relevant discussion on or.stackexchange here:

    When to use indicator constraints versus big-M approaches in solving (mixed-)integer programs

    - Riley

