Gurobi lets users warm-start an optimization with a given solution, which can save a lot of time since the best objective starts from a better value than it otherwise would. Why is it not possible to do the same with the best bound when one knows a good value for it?
Let me illustrate with an example: I have been experimenting with the travelling salesman problem, using the Dantzig-Fulkerson-Johnson formulation, to which I iteratively add subtour elimination constraints until I find a best solution with no subtour (this is, I believe, the best-known way to solve the TSP via MIP).
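For context, the subtour-detection step of that iterative loop can be sketched in plain Python. This is only an illustration with names of my own choosing, not Gurobi API code; it assumes the MIP solution is given as a list of selected edges in which every city has degree two:

```python
from collections import defaultdict

def find_subtours(edges, n):
    """Split the selected edges of a candidate TSP solution into cycles.

    edges: list of (i, j) pairs chosen by the MIP (each of the n cities
    has degree exactly 2). Returns a list of cycles, each a list of
    cities. A single cycle covering all n cities means the solution is a
    valid tour; otherwise each short cycle yields a new subtour
    elimination constraint for the next solve.
    """
    adj = defaultdict(list)
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)

    unvisited = set(range(n))
    cycles = []
    while unvisited:
        start = next(iter(unvisited))
        cycle = [start]
        unvisited.remove(start)
        prev, cur = None, start
        while True:
            # Each city has two neighbours; step to the one we did not
            # just come from, until the walk closes back on its start.
            nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
            if nxt == start:
                break
            cycle.append(nxt)
            unvisited.remove(nxt)
            prev, cur = cur, nxt
        cycles.append(cycle)
    return cycles

# Two 3-city subtours instead of one 6-city tour:
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(find_subtours(edges, 6))
```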
Looking at the Gurobi logs, I noticed that a significant part of the computation was spent just raising the best bound from its initial LP-relaxation value up to the quickly found best objective. Yet, after each subtour-containing solution is obtained, I know for sure that the true optimal tour has a higher objective value than this invalid solution, so I could take the value of the subtour-containing solution and use it as the starting best bound for the next solve, which would then begin with a smaller gap. Being unable to perform this apparently simple operation is very frustrating.
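To make the gap argument concrete, Gurobi reports the relative MIP gap as |ObjBound - ObjVal| / |ObjVal|. The numbers below are made up purely for illustration, but they show how starting the bound at a known subtour-solution value would shrink the initial gap:

```python
def mip_gap(incumbent, bound):
    """Relative MIP gap in Gurobi's convention: |incumbent - bound| / |incumbent|."""
    return abs(incumbent - bound) / abs(incumbent)

# Hypothetical minimization values: LP-relaxation bound 80, a previous
# subtour-containing solution of value 95, and an incumbent tour of 100.
print(mip_gap(100, 80))  # gap when the bound starts at the LP value
print(mip_gap(100, 95))  # gap if the bound could start at 95 instead
```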
Note: one could simply add an inequality constraint on the model's objective expression, but that does not necessarily reduce the computation time, and in my case it increased it.
If this feature already exists, then I apologize, and I would really like to know where to find it, since my searches on the web and in the reference manual were not fruitful.
Thank you very much!