Solving LPs sequentially versus one large LP with non-interacting variables/constraints




  • Silke Horn
    Gurobi Staff

    There is no general answer to this; it really depends on your model. Unless you are using a remote Compute Server, the overhead of creating and optimizing many small models should be very small. In the end, the only way to find out which method is faster for your model is to try and compare both.

  • Matthias Miltenberger
    Gurobi Staff

    Hi Carlos,

    That's an interesting question. Usually, it's best to supply the solver with all the additional information you have about the model. In this case, that information is that the model actually decomposes into many small independent problems, so it is usually best to solve them individually. Of course, this also makes it easy to parallelize the solves.

    If you just put everything into one model, then at best the solver detects the structure and tries to deal with it accordingly; at worst, you simply end up solving a much larger problem. Even if the solver detected the components perfectly, you would most likely not end up with better performance.

    This can be different for very easy problems (e.g., when model creation overhead dominates total solving time) or when communication overhead is a point to consider: if you would otherwise have to transfer model data to a distant server many times, you can benefit from putting it all into one model.

    It's hard to estimate which approach works best for a general problem, because this depends heavily on the instance itself.
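    The "solve them individually, possibly in parallel" idea can be sketched as follows. This is an illustration only, not Gurobi code: `scipy.optimize.linprog` stands in for the solver, the random LPs and the helper names (`make_small_lp`, `solve_one`) are invented for the example, and a thread pool is used for simplicity (with the Gurobi C API you would typically give each worker its own environment and model).

    ```python
    from concurrent.futures import ThreadPoolExecutor

    import numpy as np
    from scipy.optimize import linprog

    def make_small_lp(seed):
        """Generate one random small LP: minimize c @ x s.t. A_ub @ x <= b_ub, x >= 0."""
        rng = np.random.default_rng(seed)
        c = -rng.uniform(0.1, 1.0, size=5)       # negated costs: effectively a maximization
        A_ub = rng.uniform(0.1, 1.0, size=(3, 5))  # strictly positive rows keep the LP bounded
        b_ub = rng.uniform(1.0, 2.0, size=3)
        return c, A_ub, b_ub

    def solve_one(lp):
        """Solve a single small LP and return its optimal objective value."""
        c, A_ub, b_ub = lp
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
        return res.fun

    lps = [make_small_lp(s) for s in range(20)]

    # Sequential: one small solve per subproblem.
    sequential = [solve_one(lp) for lp in lps]

    # Parallel: the subproblems are independent, so the solves can be farmed out.
    with ThreadPoolExecutor() as pool:
        parallel = list(pool.map(solve_one, lps))

    assert np.allclose(sequential, parallel)
    ```

    The key point is structural: because no variable or constraint is shared between subproblems, the solves need no coordination at all, and each worker sees a tiny model instead of one large one.
    
    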


  • Carlos Martin
    First Question

    Model creation overhead is exactly what I'm concerned about, since I'm trying to solve a very large batch of small-to-medium-sized LPs. For example, I'm wondering about the overhead of calling GRBnewmodel and GRBoptimize several times as compared to making a single large model once and solving it.
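    For comparison, merging non-interacting LPs into one large model just means stacking the cost vectors and placing the constraint matrices on a block diagonal, and the combined optimum is then the sum of the individual optima. A minimal sketch, again using `scipy.optimize.linprog` and `scipy.sparse.block_diag` as stand-ins rather than the Gurobi C API, with two tiny made-up LPs:

    ```python
    import numpy as np
    from scipy.optimize import linprog
    from scipy.sparse import block_diag

    # Two independent small LPs (negated costs, i.e. maximizations).
    c1 = np.array([-1.0, -2.0]); A1 = np.array([[1.0, 1.0]]); b1 = np.array([4.0])
    c2 = np.array([-3.0]);       A2 = np.array([[2.0]]);      b2 = np.array([6.0])

    # Solve each one separately.
    f1 = linprog(c1, A_ub=A1, b_ub=b1, bounds=(0, None), method="highs").fun  # -8.0
    f2 = linprog(c2, A_ub=A2, b_ub=b2, bounds=(0, None), method="highs").fun  # -9.0

    # One large LP: concatenated costs and a block-diagonal constraint matrix,
    # so the two variable groups never interact.
    c = np.concatenate([c1, c2])
    A = block_diag([A1, A2])
    b = np.concatenate([b1, b2])
    f = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method="highs").fun

    # With no coupling, the big LP's optimum is the sum of the small optima.
    assert np.isclose(f, f1 + f2)
    ```

    Whether the one-shot version wins then comes down to exactly the trade-off discussed above: per-call setup overhead saved versus the cost of handing the solver one larger problem.
    
    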

