Automatic scaling of a problem in advance, before calling the solver
Hi,
We have a model representing the chemical reactions in a cell that we use for various MILP and LP problems, typically to obtain the metabolic fluxes (variables) through chemical reactions. An issue with the model is that there is a large variance in the variable values (many orders of magnitude), leading to numerical errors and other problems. I think this could be solved, or at least improved, by scaling the rows and columns of the problem matrix. I have understood that this is a difficult problem to do for each run. But in our case we would only have to do it once, since the matrix (representing chemical reactions in cells) really doesn't change that much between runs. So, is there a way to run this once and get out good scaling factors for rows and columns? It doesn't matter if that run takes three weeks.
Best,
Johan
-
Hi Johan,
No, there is no way to export the scaling factors from Gurobi. Scaling is actually pretty cheap - especially compared to solving the instance - so I don't think you would gain much by skipping this step.
Also, scaling cannot remove all numerical issues in a model. If your variables appear with very different magnitudes, you should try reformulating your model instead. Maybe a two-step approach could work, separating the model into two (or even more) parts.
In any case, you should review our Guidelines for Numerical Issues.
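Before reformulating anything, it can also be worth experimenting with the solver's own scaling and numeric-stability parameters. Here is a minimal sketch using the Python API (gurobipy); the file name model.mps is just a placeholder:

```python
# Minimal sketch (the file name "model.mps" is a placeholder): try the
# solver's built-in scaling and numeric-stability parameters before
# reformulating the model.
import gurobipy as gp

m = gp.read("model.mps")

# ScaleFlag controls Gurobi's internal matrix scaling
# (-1 = automatic, 0 = off, 1-3 = increasingly aggressive variants).
m.setParam("ScaleFlag", 2)

# NumericFocus trades speed for more careful numerics (0 = automatic, up to 3).
m.setParam("NumericFocus", 3)

m.optimize()
print("Status:", m.Status)
```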
I hope that helps.
Cheers,
Matthias
-
OK, thanks, I have looked through that page. It didn't directly help me, but I think I understand my problem a bit better now. What I would want to do is the following:
1. I would like to take part of the problem (my metabolic model), and scale all rows and columns to minimize the variation in magnitude across rows and across columns.
2. I would then add some more variables and constraints, for example that some reactions need to have a flux > 0.1, which means that a variable must take a value > 0.1. I would add a bunch of similar constraints, some involving a mix of boolean and continuous variables (a rough sketch of such a threshold constraint follows right after this list).
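As a hedged illustration of the kind of constraint meant in step 2, the snippet below uses a gurobipy indicator constraint to force a flux above a fixed threshold whenever a binary "reaction active" variable is switched on; the variable names, bounds, and the 0.1 threshold are made up for the example.

```python
import gurobipy as gp
from gurobipy import GRB

# Minimal sketch (placeholder names): force the flux x above a fixed threshold
# whenever the binary "reaction active" variable b is switched on.
m = gp.Model("flux_threshold_sketch")
x = m.addVar(lb=0.0, ub=1000.0, name="flux_r1")    # continuous flux variable
b = m.addVar(vtype=GRB.BINARY, name="r1_active")   # reaction on/off

# Indicator constraint: b = 1  =>  x >= 0.1
m.addGenConstrIndicator(b, True, x, GRB.GREATER_EQUAL, 0.1)

# Force the reaction on and minimize the flux, so the threshold becomes binding.
m.addConstr(b == 1)
m.setObjective(x, GRB.MINIMIZE)
m.optimize()
print(x.X)   # expected: 0.1
```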
The problem is that I need fixed limits for the reactions (variables), such as > 0.1 - but if I haven't done the scaling in step 1, this value would have to be different for different reactions (variables) to avoid numerical issues, and those values are difficult to figure out. So, my question is: is there a good algorithm for minimizing the variation in magnitude of the variable values by applying scaling? I realize I could in theory do this manually, but since the matrix is 5000x8000, that is a bit challenging. It doesn't have to be perfect, just improve on the current situation.
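For reference, one standard approach to this kind of question is geometric-mean equilibration: repeatedly divide each row and each column by the geometric mean of its largest and smallest nonzero magnitude. The sketch below is a plain NumPy illustration of that idea, not Gurobi's internal scaling routine; the matrix A, the dense representation, and the number of passes are placeholders.

```python
import numpy as np

def geometric_mean_scaling(A, passes=10):
    """Return row factors r and column factors c so that
    diag(r) @ A @ diag(c) has nonzero magnitudes closer to 1."""
    B = np.array(A, dtype=float)
    m, n = B.shape
    r = np.ones(m)
    c = np.ones(n)
    for _ in range(passes):
        # Row pass: divide each row by the geometric mean of its
        # largest and smallest nonzero magnitude.
        for i in range(m):
            row = B[i]
            nz = np.abs(row[row != 0])
            if nz.size:
                s = 1.0 / np.sqrt(nz.max() * nz.min())
                r[i] *= s
                B[i] *= s
        # Column pass: same idea applied to the columns.
        for j in range(n):
            col = B[:, j]
            nz = np.abs(col[col != 0])
            if nz.size:
                s = 1.0 / np.sqrt(nz.max() * nz.min())
                c[j] *= s
                B[:, j] *= s
    return r, c

# Tiny example with a badly scaled column:
A = np.array([[1.0, 1e-6],
              [0.0, 1e3]])
r, c = geometric_mean_scaling(A)
print(np.diag(r) @ A @ np.diag(c))   # entries much closer to 1 in magnitude
```

The returned column factors c correspond to the substitution x_j = c_j * y_j, which is exactly what determines how a fixed threshold such as 0.1 would have to be rescaled per variable.
-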
Hi Johan,
Apologies for the delay. It's an interesting idea. Do you mean like an automated tool for advanced user-scaling? If so, I have been playing around with an idea that could be useful for this. Do you have any models (mps / lp files) you could share with me that highlight the bad numerics?
The security of your model and data is important to us. The safest way to send the models is through the upload page on the Gurobi web site:
- Go to https://www.gurobi.com
- Login to your Gurobi account
- From the Support menu, select "Upload"

The Gurobi support team will be notified when a file has been uploaded. The files are stored on a secure server. Your files will be accessible only by Gurobi support, developers, and IT staff. We have been trained to handle these files in a secure way.
-
Hi Steven, and thanks for responding! I have solved my problems by now :). What I did was to brutally change my model, simply changing things such as x1 + 0.000001 x2 into something like x1 + 0.01 x2 - it doesn't really matter for the specific case I am solving. It made a huge difference in performance - of all the tricks I applied, this was one of the most effective. So from my perspective, I don't need this anymore; that project is now finished. But if you are interested in developing something as part of the product, then perhaps I could help by supplying some models. Let me know in that case.

But think through the use case. What I was asking for was an algorithm, which I intended to implement myself in the layer above Gurobi. The idea was to do something once in my template model in that layer that would speed things up when running Gurobi - but that template model is not a Gurobi model, it is something else. The template model is used together with other data (various constraints etc.) to generate a Gurobi model. I have, for example, solved more than 20,000 problems based on the template model, but for each of those I sent in a unique Gurobi model. So my thought was to run this scaling optimization once on the template model, which would speed up the > 20,000 runs in Gurobi - it doesn't matter so much if that one run is a bit slow. In my case I would then need to transform my template model into a Gurobi model somehow, run the scaling, and get the factors back, so I could apply them to my template.
So, don't do this for me, but if you think it is a good feature in Gurobi, go ahead :)
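To make that last step concrete: if the template layer stores something like A x <= b with bounds lb <= x <= ub, then applying precomputed row factors r and column factors c amounts to the substitution x = c * y plus rescaling each constraint and its right-hand side. A hedged sketch, with array names that are placeholders for whatever the template layer actually stores:

```python
import numpy as np

# Hedged sketch: apply precomputed row factors r and column factors c to a
# template model of the form  A x <= b,  lb <= x <= ub.  The array names are
# placeholders for whatever the template layer actually stores.
def apply_scaling(A, b, lb, ub, r, c):
    A_scaled = np.diag(r) @ A @ np.diag(c)   # scale constraints and variables
    b_scaled = r * b                         # row scaling also multiplies the RHS
    lb_scaled = lb / c                       # column scaling is the substitution
    ub_scaled = ub / c                       #   x_j = c_j * y_j, so bounds (and
                                             #   thresholds like 0.1) are divided by c_j
    return A_scaled, b_scaled, lb_scaled, ub_scaled
```

After solving the scaled model in terms of y, the original fluxes are recovered as x = c * y.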