Gurobi MILP branch-and-cut algorithm
Hi Gurobi community,
I am trying to solve a MILP problem with gurobipy, but my model is very large, with millions of variables and constraints. When solving it, I noticed that the machine runs out of memory and the program exits. Because of the large memory footprint and long solving time, I would like to change the search algorithm to an approximate one that only keeps the K branch nodes with the largest upper bound. The algorithm is as follows:
unexp_nodes = priority_queue()        # ordered by upper bound; at most K nodes are kept
unexp_nodes.enqueue(root_node)
while unexp_nodes is not empty:
    node = unexp_nodes.dequeue()      # node with the largest upper bound
    node1, node2 = branch(node)
    solve(node1)                      # LP relaxation gives each node's upper bound
    solve(node2)
    unexp_nodes.enqueue(node1)
    unexp_nodes.enqueue(node2)        # the queue is then truncated to the K best nodes
I know this algorithm is approximate. I will provide an initial solution; what I need is a solution better than the initial one within limited time and memory (and, at the same time, as good as possible).
Official comment
This post is more than three years old. Some information may not be up to date. For current information, please check the Gurobi Documentation or Knowledge Base. If you need more help, please create a new post in the community forum. Or why not try our AI Gurobot?
Hi,
I am not sure how you want to reduce the memory requirement with the algorithm you proposed. Unless you solve a simplified version of the problem in each node, the memory requirements should stay the same.
The Knowledge Base article How do I avoid an out-of-memory condition? might help you in this case.
Best regards,
Jaromił
Hi,
Sorry for my late reply.
With this algorithm, I only need to keep at most K+2 nodes at the same time, which discards most of the nodes. I think this can reduce memory (please correct me if my understanding is wrong).
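A minimal sketch of such a bounded queue in plain Python, using heapq; K and the helper name enqueue_bounded are made up for illustration:

import heapq
import itertools

K = 100                     # illustrative cap on the number of stored nodes
tie = itertools.count()     # tie-breaker so node objects are never compared

def enqueue_bounded(queue, upper_bound, node):
    # heapq is a min-heap, so negate the bound to dequeue the largest bound first
    heapq.heappush(queue, (-upper_bound, next(tie), node))
    if len(queue) > K:
        # drop the entry with the weakest (smallest) upper bound
        queue.remove(max(queue))
        heapq.heapify(queue)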
Besides, is there a programmatic way to implement my algorithm with gurobipy?
Thanks a lot.
Jiangfei
Hi Jiangfei,
If the reason for the out-of-memory state is too large a number of open nodes, then your algorithm might help. In case you have some disk space available, you could first try setting the NodefileStart parameter, which forces Gurobi to start writing nodes to a file on the local disk once a certain memory threshold has been reached. If, however, you run out of memory in the root node already, then the algorithm will very likely not be helpful.
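For illustration, setting this in gurobipy could look like the following; the 4 GB threshold and the file names are only examples:

import gurobipy as gp

model = gp.read("large_model.mps")   # placeholder: your own model file

# start writing node files to disk once in-memory node storage exceeds ~4 GB
model.Params.NodefileStart = 4       # value in GB; chosen here only as an example
model.Params.NodefileDir = "/tmp"    # directory for the node files (optional)

model.optimize()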
Besides, is there a programmatic way to implement my algorithm with gurobipy?
You can work with \(\texttt{gurobipy}\) and its objects the same way as with other Python modules. This means that you can use other modules, such as \(\texttt{numpy}\), and the algorithms implemented within them. You would probably want to work with multiple \(\texttt{Model()}\) objects associated with the same \(\texttt{Env()}\) object.
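As a rough sketch of that setup (the tiny model here is purely illustrative):

import gurobipy as gp
from gurobipy import GRB

# one environment shared by all subproblem models
env = gp.Env()

base = gp.Model("root", env=env)
x = base.addVar(vtype=GRB.BINARY, name="x")
base.setObjective(x, GRB.MAXIMIZE)
base.update()

# each branch node can work on its own copy of the model
child = base.copy()
child.getVarByName("x").UB = 0   # e.g., fix x = 0 in this branch
child.optimize()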
Best regards,
Jaromił
Hi, Jaromił,
Thanks a lot.
You can work with gurobipy and its objects the same way as with other Python modules. This means that you can use other modules, such as numpy, and the algorithms implemented within them. You would probably want to work with multiple Model() objects associated with the same Env() object.
Do you mean I should write the algorithm and the branch function myself? If so, what about the performance? I need to keep the nodes and select the branches myself, and I think the performance may be worse.
Best regards,
Jiangfei
Hi Jiangfei,
Do you mean I should write the algorithm and the branch function myself?
Yes, or you can search for open-source implementations. I am not aware of any Python module which could fit your algorithm.
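For example, a rough self-contained sketch of the node loop from your first post could look as follows. It assumes a maximization MILP; the branching rule (first fractional variable), the value of K, and the helper names (lp_bound, push, solve_k_best) are invented for illustration and not tuned for performance:

import heapq
import itertools
import math
import gurobipy as gp
from gurobipy import GRB

K = 50                      # illustrative cap on the number of open nodes
EPS = 1e-6
tie = itertools.count()     # heap tie-breaker so Model objects are never compared

def lp_bound(node):
    # solve the LP relaxation of a node model (maximization assumed);
    # returns (upper_bound, fractional_var_name, fractional_value),
    # with the name set to None when the relaxation is integral,
    # or None when the node is infeasible
    relaxed = node.relax()
    relaxed.Params.OutputFlag = 0
    relaxed.optimize()
    if relaxed.Status != GRB.OPTIMAL:
        return None
    for v in relaxed.getVars():
        if node.getVarByName(v.VarName).VType != GRB.CONTINUOUS \
                and abs(v.X - round(v.X)) > EPS:
            return relaxed.ObjVal, v.VarName, v.X
    return relaxed.ObjVal, None, None

def push(heap, bound, node, name, value):
    heapq.heappush(heap, (-bound, next(tie), node, name, value))
    if len(heap) > K:                # keep only the K most promising nodes
        heap.remove(max(heap))       # largest negated bound = weakest upper bound
        heapq.heapify(heap)

def solve_k_best(root, incumbent):
    # best-bound search that stores at most K open nodes; returns the best
    # objective found (it may miss the true optimum, since nodes are discarded)
    root.update()
    heap = []
    res = lp_bound(root)
    if res is not None:
        push(heap, res[0], root, res[1], res[2])
    while heap:
        neg_bound, _, node, name, value = heapq.heappop(heap)
        if -neg_bound <= incumbent + EPS:
            continue                 # node cannot beat the incumbent: prune
        if name is None:
            incumbent = -neg_bound   # integral relaxation: new incumbent
            continue
        # branch on the fractional variable: floor child and ceiling child
        for lb, ub in ((None, math.floor(value)), (math.floor(value) + 1, None)):
            child = node.copy()
            var = child.getVarByName(name)
            if ub is not None:
                var.UB = ub
            if lb is not None:
                var.LB = lb
            child.update()
            res = lp_bound(child)
            if res is not None:
                push(heap, res[0], child, res[1], res[2])
    return incumbent

Note that copying the model in every node, as done here for simplicity, is expensive; solvers instead change bounds in place and undo them after exploring a subtree, which is one reason a hand-written loop is usually much slower than Gurobi's internal tree search.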
If so, what about the performance? I need to keep the nodes and select the branches myself, and I think the performance may be worse.
The performance may be worse than running Gurobi with enough memory. This is the reason why I mentioned the NodefileStart parameter and the Knowledge Base article on avoiding out-of-memory conditions in my previous messages.
Best regards,
Jaromił