
Barrier algorithm: what kinds of matrices does it handle best (sparse or dense, large or small problems)?

Comments

3 comments

  • Official comment
    Simranjit Kaur
    Gurobi Staff
    This post is more than three years old. Some information may not be up to date. For current information, please check the Gurobi Documentation or Knowledge Base. If you need more help, please create a new post in the community forum. Or why not try our AI Gurobot?
  • Yuriy Zinchenko
    Gurobi Staff

    Hello Soufyan:

    These are very general questions that are best answered by learning about interior-point methods (or barrier algorithms) from a course, so the support forum may not be the best place for them. I will try to answer some of them, though.

     

    >> For example, is it fastest for small, dense problems or for large, sparse problems?

    Generally, small sparse problems will be faster than large dense ones.

     

    >> Does the algorithm need a lot of iterations to converge, or fewer iterations where each iteration is more expensive, in general?

    Compared to simplex, which generally requires many relatively inexpensive iterations, barrier requires far fewer iterations, but each of them is much more expensive.
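
    To see the difference in practice, here is a minimal sketch with gurobipy (the file name model.mps is only a placeholder for whatever model you have at hand): solve the same model once with simplex and once with barrier, then compare iteration counts and runtimes.

        import gurobipy as gp

        # Dual simplex (Method=1): typically many cheap iterations.
        m_simplex = gp.read("model.mps")
        m_simplex.Params.Method = 1
        m_simplex.optimize()
        print("simplex iterations:", m_simplex.IterCount, " runtime:", m_simplex.Runtime)

        # Barrier (Method=2): typically a few dozen expensive iterations.
        m_barrier = gp.read("model.mps")
        m_barrier.Params.Method = 2
        m_barrier.Params.Crossover = 0  # skip crossover so the timing reflects barrier alone
        m_barrier.optimize()
        print("barrier iterations:", m_barrier.BarIterCount, " runtime:", m_barrier.Runtime)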

    >> Dozens of expensive iterations: why are the iterations both numerous and expensive?

    See above.

    >> Much denser matrices: Why denser matrices? I thought Gurobi could handle sparse matrices better?

    This probably refers to the internal structures in the solver, meaning that it will operate on denser (intermediate) matrices at every iteration.
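
    As a rough illustration of that fill-in effect (sketched with SciPy rather than Gurobi's actual internals, so treat it only as a sketch): barrier-type methods factor, at every iteration, a matrix with the sparsity pattern of A A^T, which is usually much denser than the constraint matrix A itself.

        import numpy as np
        import scipy.sparse as sp

        rng = np.random.default_rng(0)
        # A sparse "constraint matrix" with about 1% nonzeros.
        A = sp.random(300, 1000, density=0.01, format="csr", random_state=rng)
        # The normal-equations-style product that barrier-type methods factor each iteration.
        AAt = (A @ A.T).tocsr()

        print("density of A:     ", A.nnz / (A.shape[0] * A.shape[1]))
        print("density of A A^T: ", AAt.nnz / (AAt.shape[0] * AAt.shape[1]))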

    >> Lots of opportunities to exploit parallelism: What is parallelism?

    Parallelism refers to using several CPU cores at once; the heavy linear algebra inside each barrier iteration can be split across cores.
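
    In Gurobi, the number of cores used is capped by the Threads parameter; a minimal sketch (again with a placeholder model file):

        import gurobipy as gp

        m = gp.read("model.mps")   # placeholder file name
        m.Params.Method = 2        # select the barrier algorithm
        m.Params.Threads = 4       # allow up to 4 CPU cores
        m.optimize()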

     

    Hope this helps.

  • Soufyan Zayou
    Gurobi-versary
    Conversationalist
    Curious

    Hi,

    >> Compared to simplex, which generally requires many relatively inexpensive iterations, barrier requires far fewer iterations, but each of them is much more expensive.

    And how does it compare to Matlab's interior-point-convex algorithm? Gurobi and quadprog gave the same objective function value, but what is the relation between convergence and the cost of the iterations? And what about GPAD?

    >> This probably refers to the internal structures in the solver, meaning that it will operate on denser (intermediate) matrices at every iteration.

    Aren't the solver's internal structures sparse? I thought that was why Gurobi handles sparse matrices so well compared to, for example, Matlab's quadprog: for a sparse QP, quadprog's computation time is much higher than solving the same problem in Gurobi.


Post is closed for comments.