Gurobi's Solvetime
Hi, I defined a linear model and its objective function. After solving it, I redefined the objective function and solved again. The Runtime reported for the second solve was much larger than the Runtime I get when I separate the two problems and solve the model twice. But I am sure the larger time was not the total time of the two steps. How should I interpret this?

Hi Jaromił,
Regarding the difficulty we discussed: I realized that when I set the sub_dual problem's Presolve parameter to 1 inside the callback function, the log showed that the solver did not presolve the model. I guess the parameter was effectively set to 0 because of the lazy constraints. How should I change the parameter? (Changing it through model.presolve was useless, while resetting the model would lose some important information.)
Hi Jacob,
As mentioned in my previous comment:
With warm start (without reset), the original model is solved, i.e., no presolve is performed. You can change this behavior by setting the LPWarmStart parameter to 2 or 0 (equivalent to reset).
As far as I can judge the situation in your case, I would not use the reset method and would instead experiment with the LPWarmStart=2 setting.
Hi Jaromi,
I have tried your suggestion but it didn't work. The thing I can ensure is that I didn't let the sub_dual problem be preresolved when I defined the objective function in the callback function each time. I am trying if could do this by defining different environments. The master problem and sub_dual are solved in 2 environments. I want to know does it work or if there are any examples?
I found the point!
import gurobipy as gb

env_1 = gb.Env()
# env_2 = gb.Env()
env_1.setParam("Presolve", 1)

m_sub = gb.read("iteration0.lp", env=env_1)
m_sub.optimize()

m_sub.setObjective(0)
m_sub.setParam("Presolve", 1)
m_sub.optimize()
Even though I set the Presolve parameter, the model could not be presolved after I reset the objective function. It seems this is the same situation as in the general case. How can I turn presolve back on? I don't want to reset the model, since I want to reuse the earlier information.
As mentioned in my previous comment:
The reason why each sub model solves faster with reset is that presolve reduces the size of the model and makes the overall solution process quicker. With warm start (without reset), the original model is solved (no presolve is performed). You can change this behavior by setting the LPWarmStart parameter to 2 (use warm start for the presolved model; this does degrade performance in your case) or 0 (equivalent to reset).
Regarding your latest code snippet: since a warm start is available after the first optimize call, no presolve is performed (the Presolve setting is ignored) and the original model is solved with the warm start information. To force Gurobi to use the warm start information and still perform presolve, you have to set the LPWarmStart parameter:
m_sub = gb.read("iteration0.lp", env=env_1)
m_sub.optimize()
m_sub.setObjective(0)
m_sub.setParam("LPWarmStart", 2)
m_sub.optimize()
Output:
[...]
Optimal objective 1.280000000e+01
Set parameter LPWarmStart to value 2
Gurobi Optimizer version 9.5.1 build v9.5.1rc2 (mac64[x86])
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 7360 rows, 1398 columns and 20840 nonzeros
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [0e+00, 0e+00]
Bounds range [3e+01, 3e+01]
RHS range [2e-01, 2e-01]
Presolve removed 440 rows and 110 columns
Presolve time: 0.01s
Presolved: 6920 rows, 1288 columns, 19908 nonzeros
LP warmstart: get starts with basis, then crush them
Solved in 0 iterations and 0.01 seconds (0.01 work units)
Optimal objective 0.000000000e+00
Note that the LPWarmStart parameter is only available in Gurobi version 9.5.x.
Please also note that LPWarmStart does not guarantee improved performance, especially when the LP models are very small and can already be solved quickly without any presolve.