Hi! I am using Gurobi to solve the MaxCut problem on a particular class of graphs. MaxCut is a maximization problem defined on a graph: each vertex is a binary variable and each edge contributes one cost-function term. The number of vertices N is the problem size. I use only 3-regular graphs of different sizes, so the cost function is quadratic with 1.5*N terms, and each variable participates in exactly 3 of them.
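For concreteness, here is a minimal sketch of the setup described above. The graph generator and the gurobipy model construction are my own illustration (function names and the configuration-model sampler are assumptions, not the actual notebook code):

```python
import random

def random_3_regular(n, seed=0):
    """Sample a simple 3-regular graph on n vertices (n even) via the
    configuration model, resampling until there are no self-loops or
    parallel edges."""
    rng = random.Random(seed)
    while True:
        stubs = [v for v in range(n) for _ in range(3)]  # 3 stubs per vertex
        rng.shuffle(stubs)
        edges = set()
        for i in range(0, len(stubs), 2):
            u, v = stubs[i], stubs[i + 1]
            if u == v or (min(u, v), max(u, v)) in edges:
                break  # reject this pairing and resample
            edges.add((min(u, v), max(u, v)))
        else:
            return sorted(edges)

def build_maxcut_model(n, edges):
    """MaxCut as a binary quadratic program: edge (u, v) contributes
    x_u + x_v - 2*x_u*x_v, which equals 1 exactly when the endpoints
    fall on opposite sides of the cut."""
    import gurobipy as gp  # local import so the sketch loads without Gurobi
    from gurobipy import GRB
    m = gp.Model("maxcut")
    x = m.addVars(n, vtype=GRB.BINARY)
    m.setObjective(
        gp.quicksum(x[u] + x[v] - 2 * x[u] * x[v] for u, v in edges),
        GRB.MAXIMIZE,
    )
    return m

edges = random_3_regular(100)
print(len(edges))  # 1.5 * N quadratic terms, each vertex in exactly 3
```

A 3-regular graph on N vertices has exactly 1.5*N edges, which is where the 1.5*N quadratic terms come from.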
I observe strange behavior of Gurobi's optimization on large instances. Beyond some size N, increasing the problem size actually improves performance, even though the problem should be harder: the optimality gap between the bounds drops faster for larger N.
For instance, for N1 = 7600 and N2 = 10000, the gaps after 100 seconds are 0.17 and 0.11 respectively. This behavior shows up in most random instances of the problem. I have published a Jupyter notebook on Colab with a more detailed explanation.
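For reference, by "gap" I mean Gurobi's relative MIP gap, |ObjBound - ObjVal| / |ObjVal| (the `MIPGap` attribute). A tiny check of what a gap of 0.17 means:

```python
def mip_gap(obj_bound, obj_val):
    """Gurobi's relative MIP gap: |best bound - incumbent| / |incumbent|."""
    return abs(obj_bound - obj_val) / abs(obj_val)

# A gap of 0.17 means the best upper bound is 17% above the incumbent cut.
print(round(mip_gap(1.17, 1.0), 2))
```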
I also ran an experiment on 30 random graphs and plotted gap vs. time for each of them.
You can see that for N=7600 at t=500 seconds, half of the instances have gap > 0.16, while for N=10000 at the same time half of the graphs have gap < 0.085. In general, it looks like Gurobi finds good solutions much faster for N=10000 than for N=7600.
Is there any explanation for this behavior? Does Gurobi switch heuristics internally depending on the problem size?