Solving LPs sequentially versus one large LP with non-interacting variables/constraints
Suppose I'm solving ~1000 linear programs with ~10 variables and ~100 constraints each. Is it faster to solve these sequentially (creating a new model each time) or to create a single large linear program with components that don't interact (i.e. the components corresponding to each original linear program) and solve it once? Does the latter benefit from any sort of parallelism? Is there a point (in terms of the size of the problem) where one becomes faster than the other?
There is no general answer to this; it really depends on your model. Unless you're sending solves to a remote compute server, the overhead of creating and optimizing many small models should be very small. In the end, the only way to find out which method is faster for your model is to try both and compare.
Hi Carlos,
That's an interesting question. Usually, it's best to supply the solver with all the additional information you have about the model. In this case, that means exploiting the fact that the problem decomposes into many small independent LPs and solving them individually. Of course, this is also easy to parallelize; see the sketch below.
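For example, with the C API, a parallel batch solve could look roughly like this. It's only a sketch of the idea, not tested code: build_lp is a hypothetical helper standing in for however you construct one subproblem, one environment is created per thread, and all error checking is omitted for brevity.

```c
/* Sketch: solve many independent small LPs in parallel with OpenMP.
 * build_lp() is a hypothetical helper that creates and populates one
 * subproblem (via GRBnewmodel, GRBaddvars, GRBaddconstr, ...).
 * Error checking is omitted for brevity. */
#include <omp.h>
#include "gurobi_c.h"

#define NUM_LPS 1000

int build_lp(GRBenv *env, GRBmodel **modelP, int k);  /* hypothetical */

void solve_all_parallel(double *objvals)
{
    #pragma omp parallel
    {
        GRBenv *env = NULL;
        GRBloadenv(&env, NULL);             /* one environment per thread */
        GRBsetintparam(env, "Threads", 1);  /* one core per LP solve */

        #pragma omp for schedule(dynamic)
        for (int k = 0; k < NUM_LPS; k++) {
            GRBmodel *model = NULL;
            build_lp(env, &model, k);       /* create one small LP */
            GRBoptimize(model);
            GRBgetdblattr(model, GRB_DBL_ATTR_OBJVAL, &objvals[k]);
            GRBfreemodel(model);
        }

        GRBfreeenv(env);
    }
}
```

Setting the Threads parameter to 1 here means each individual solve does not compete for cores with the OpenMP loop, which already provides the parallelism across subproblems.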
If you just put everything into one model, then at best the solver detects the structure and tries to deal with it accordingly; at worst, you simply end up solving a much larger problem. You will most likely not get better performance, even if the solver detected the components perfectly.
For very easy problems this might be different, e.g. when model creation overhead dominates total solving time, or when communication overhead is a concern: if you would otherwise have to transfer model data many times to a remote server, you can benefit from putting everything into one model.
It's hard to estimate which approach works best in general, because this depends heavily on the instance itself.
Cheers,
Matthias
Model creation overhead is exactly what I'm concerned about, since I'm solving a very large batch of small-to-medium-sized LPs. Specifically, I'm wondering about the overhead of calling GRBnewmodel and GRBoptimize a thousand times, compared to building a single large model once and solving it. My understanding is that the expensive one-time call is GRBloadenv (which includes the license check), so the sequential variant I have in mind looks roughly like the sketch below.
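Again, just a sketch of what I mean, reusing the same hypothetical build_lp helper and omitting error checks on the remaining Gurobi calls:

```c
/* Sketch of the sequential variant: pay the GRBloadenv startup cost
 * once and reuse the environment, so each iteration only pays for
 * GRBnewmodel/GRBoptimize/GRBfreemodel on a small LP. */
#include "gurobi_c.h"

int build_lp(GRBenv *env, GRBmodel **modelP, int k);  /* hypothetical */

int solve_all_sequential(int num_lps, double *objvals)
{
    GRBenv *env = NULL;
    int error = GRBloadenv(&env, NULL);  /* once, not per LP */
    if (error) return error;

    for (int k = 0; k < num_lps; k++) {
        GRBmodel *model = NULL;
        build_lp(env, &model, k);        /* GRBnewmodel + rows/cols */
        GRBoptimize(model);
        GRBgetdblattr(model, GRB_DBL_ATTR_OBJVAL, &objvals[k]);
        GRBfreemodel(model);             /* release before the next LP */
    }

    GRBfreeenv(env);
    return 0;
}
```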