I have a two-stage model, divided into a primal master problem and a dual subproblem. I added model.Params.OptimalityTol = 1e-3 and model.Params.Method = 0 to the dual subproblem, and model.Params.FeasibilityTol = 1e-3 and model.Params.IntFeasTol = 1e-3 to the master problem. The coefficient statistics are:
Matrix range     [2e-04, 3e+02]
Objective range  [2e-01, 6e+00]
Bounds range     [1e+00, 1e+00]
RHS range        [1e+00, 3e+03]

• Gurobi Staff

Hi Arezo,

If you are solving the same model twice with the exact same input and parameters, it should result in the exact same solution.

How are you forcing the model to have the exact same solution when you solve it for the second time? How different is the objective function? You might want to explicitly evaluate the objective function on the solution obtained to verify its accuracy.

The default feasibility and optimality tolerances are 1e-6 and the default IntFeasTol is 1e-5. Since the smallest matrix coefficient value is 2e-4, it does not make sense to set the feasibility and optimality tolerances to a value greater than the smallest meaningful value in the model. Therefore, I recommend sticking to the default tolerance values and not changing them unless there is a very good reason.

Best regards,

Maliheh

Hi Maliheh,

Thank you so much for your response. Yes, I solved the model twice with the exact same input and parameters, but the objective function is not the same. As you advised, I kept the default values for the tolerances. The model's decision variables include Xs, sigma, gamma, and Z. I forced the model to use the exact same values for the Xs and solved it a second time, as follows (X is a binary variable):

X["supp1"].lb = 1  # to force it to be 1

X["supp2"].ub = 0  # to force it to be zero

I added these before model.optimize(). In other words, the second run starts from an initial solution and solves in fewer iterations, but the objective function is different (I expected to get the same optimal objective).

Also, I noticed that for small problems the difference between the objective functions is insignificant, but I need to solve big problems, and there the difference is significant. The objective in the first run is 5.4570532468224155e+02 and in the second run it is 5.3258073730958711e+02.

Is this the correct way to force the model?

Thank you!

• Gurobi Staff

Hi Arezo,

It seems to me that you are only fixing the values of the variables $$X$$ and not the other variables such as $$\sigma$$, $$\gamma$$, and $$Z$$.

What you are doing is correct with respect to fixing the variables $$X$$. The presolve will fix the variables $$X$$ and continue to find the optimal values for the other decision variables. The Gurobi Optimizer is a floating-point-based MIP solver subject to round-off error, and a solution is feasible/optimal only within tolerances. For example, a variable $$x_i$$ that is fixed to exactly 1 in the second model can take any value within $$1 \pm 10^{-5}$$ in the first run. Furthermore, there is a default MIPGap of 1e-4.

I would suggest writing the solution of the second model into a file using the method Model.write("solution.sol") and then evaluating the objective function directly on the solution values. If it evaluates to the same value reported by the Gurobi Optimizer and is also within the default MIPGap of the best bound, then what you see is just a consequence of numerical error. You can also compare the values that the decision variables take in the first and second runs to ensure they do not differ by more than the default tolerances.
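As a minimal sketch of this check in pure Python (the file name, function names, and objective coefficients below are illustrative, not from Arezoo's model; a Gurobi .sol file lists one `name value` pair per line after a `#` comment header):

```python
def read_sol(path):
    """Parse a Gurobi .sol file into a {variable name: value} dict."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):  # skip the objective comment line
                continue
            name, val = line.split()
            values[name] = float(val)
    return values

def evaluate_objective(obj_coeffs, values):
    """Evaluate a linear objective sum(c_i * x_i) on parsed solution values."""
    return sum(coeff * values.get(name, 0.0) for name, coeff in obj_coeffs.items())
```

Comparing the value returned by `evaluate_objective` against the solver's reported ObjVal (and the best bound) shows whether the discrepancy is just tolerance-level noise.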

That being said, the difference between the two objective functions you posted is larger than what I would have expected to see as a numerical error. What MIPGap value are you using?

Best regards,

Maliheh

Thank you for your help, I really appreciate it. Yes, the presolve fixes the variables X and continues finding the optimal values for the other decision variables. I also used model.write("solution.sol"), but I did not calculate the objective function manually to compare with the value obtained by the Gurobi Optimizer. Still, it is obvious that the difference between the two objectives is large. I did not set MIPGap, so I think it has the default value.

Also, the optimal values of the other decision variables differ significantly from the first run. The values of X are the same, since I forced them to the exact same values. The termination condition for my Benders decomposition loop is "Upper bound - Lower bound > 1e-4".
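For concreteness, that termination test can be written as the following pure-Python predicate (the function name is illustrative; the tolerance is the one quoted above):

```python
def benders_gap_open(upper_bound, lower_bound, tol=1e-4):
    """True while the Benders gap UB - LB still exceeds the tolerance,
    i.e. while the decomposition loop should keep iterating."""
    return upper_bound - lower_bound > tol
```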

• Gurobi Staff

Could you send us a minimal reproducible example so that we can investigate further what might be causing this behaviour? You can share it with us through Google Drive, Dropbox, or OneDrive.

Maliheh

Hi Maliheh,

Thanks for your help. I created a folder on Google Drive. The link is here:

Thank you!

• Gurobi Staff

Hi Arezoo,

Thanks for sharing this. It seems I do not have permission to access the folder. I sent an email requesting access.

Best regards,

Maliheh

Hi,

Sorry about that. I have accepted the request. Please let me know if you have any trouble accessing it.

• Gurobi Staff

Hi Arezoo,

I looked at your code.

- When I ran it for the first time, I got 554.062 and it took 38 iterations.

- In the $$\texttt{MasterProblem.optimization()}$$ method, at each iteration you are over-writing the model and the solution of the master problem in files named $$\texttt{MP.lp}$$ and $$\texttt{MP.sol}$$, respectively.

- You then fix the values of variables $$X$$ and solve the model again. This solution is the optimal solution of the last-iteration master problem with all added cuts. Using this solution as a start for the first-iteration master problem is not going to have any impact. Doing this, I again got 554.062 in 38 iterations.

- To test that starting from the optimal solution of the last-iteration master problem in the first run results in the same objective value in just one iteration in the next run, your $$\texttt{MasterProblem.optimization()}$$ method should change to:

self.MP.update()
# Read the last-iteration master problem model in and add the optimality cut
# constraints to the self.MP model
model = gb.read("MP.lp")
for c in model.getConstrs():
    if "optimality_cut" in c.ConstrName:
        row, rhs = model.getRow(c), c.RHS
        new_row = 0
        for i in range(row.size()):
            var, coeff = row.getVar(i), row.getCoeff(i)
            # The constraint should be represented with respect to the
            # variables in the self.MP model
            new_row += coeff * self.MP.getVarByName(var.VarName)
        self.MP.addConstr(new_row >= rhs, name=c.ConstrName)
# Read the solution of the last-iteration master problem in to be used
# as a MIP start
self.MP.read("MP.sol")
self.MP.optimize()
I did run this and I got the exact same value 554.062 in just one iteration.

I hope this makes things clear.

Best regards,
Maliheh

Thanks for your response.

It seems that you read all the decision variables from "MP.sol" and use them as a start for the first-iteration master problem. But I force the model to use the same values only for the Xs, not the other decision variables (i.e., gamma, sigma, and Zs), and then I get a different objective value in the second run. Am I wrong to assume that I should get the same optimal value by fixing only the Xs?

Also, I noticed that when I solve the model on two different computers, I get different optimal values, 550.9982 and 561.8061 (31 iterations), but you got 554.062 in 38 iterations. Do you know why that is?

• Gurobi Staff

Hi Arezoo,

Then I get a different objective value in the second run. Am I wrong to assume that I should get the same optimal value by fixing only the Xs?

No, starting from any initial solution, either partial or complete, should not change the optimal solution. You can use the variable attribute Start to set starting values for a subset of the variables.

Also, I noticed that when I solve the model on two different computers, I get different optimal values, 550.9982 and 561.8061 (31 iterations), but you got 554.062 in 38 iterations. Do you know why that is?

Running the same problem on different hardware definitely comes with performance variability. Please check the article Why does Gurobi perform differently on different machines? for more details. The solution path might change, but the optimal solution should not. You might want to first check that there is no error in the code constructing the Benders cuts. Another possibility is that, since you are iteratively solving a number of master and sub-problems to the default non-zero MIPGap of 1e-4, the numerical error adds up to such a large value. It might be worth experimenting with tighter MIPGap values to see whether the difference shrinks or not.

Best regards,

Maliheh

Hi Maliheh,

Thank you so much for your response. It was super helpful.

When I use the variable attribute Start to fix a subset of variables to starting values, I get the same objective value.

I noticed that when I solved an example, in the last iteration the lower bound was greater than the upper bound and the gap was negative. It does not happen for all problems, just in some cases. Do you know why that is?

• Gurobi Staff

Hi Arezoo,

I noticed that when I solved an example, in the last iteration the lower bound was greater than the upper bound and the gap was negative. It does not happen for all problems, just in some cases. Do you know why that is?

This means to me that your master problem says there is no better solution than $$x$$ while your sub-problem finds a feasible solution better than $$x$$. I would suggest checking the validity of the cuts you are adding to the master problem to ensure they do not remove any feasible solution. One other idea is to solve the master problem using the sub-problem solution that beats the bound as a start and see whether the solver reports infeasibility. If yes, you can then compute the IIS (using the method Model.computeIIS()), write it into a file, and see what has gone wrong.

Best regards,

Maliheh

Hi Maliheh,

I appreciate your help.

What do you mean by using the solution of the subproblem and solving the master problem?

It is unclear to me because the decision variables in the subproblem differ from those in the master problem. I use the solution of the subproblem to create feasibility/optimality cuts and add them to the master problem.

Also, could you please help me check the cuts' validity? I checked the code for the cuts and thought it was fine. I add cuts to the master problem with "model.addConstrs(...)". Do you have any suggestions for checking the validity of the cuts?

• Gurobi Staff

Hi Arezoo,

Sorry for the delay!

It is unclear because the decision variables in the subproblem differ from the Master problem. I use the solution of the subproblem to create feasibility/optimality cuts and add them to the Master problem.

I meant adding the generated cut to the master problem and solving the master problem again. At the end, you have a complete solution containing values for the decision variables in the master problem as well as those in the sub-problem. One other idea is to plug this solution into the global model and check whether the global model is infeasible or not.

To check the validity of your cuts, two conditions need to be proved (see Section 3.1 in the paper Generating Benders Cuts for a General Class of Integer Programming Problems):

1) At any iteration, the cut removes the current solution in the master problem if it is infeasible.

2) At any iteration, the cut does not remove any globally optimal solution.
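Condition 2 can also be spot-checked numerically: given a cut's coefficients and a solution known to be globally optimal, verify the cut does not remove it. A minimal pure-Python sketch (the cut and solution values below are purely illustrative):

```python
def cut_is_satisfied(coeffs, rhs, solution, tol=1e-6):
    """Check that a cut sum(a_i * x_i) >= rhs keeps the given solution,
    allowing a small tolerance for round-off."""
    lhs = sum(a * solution.get(name, 0.0) for name, a in coeffs.items())
    return lhs >= rhs - tol

# Illustrative optimality cut eta - 2*X[supp1] >= 2 and a known-good solution:
cut = {"eta": 1.0, "X[supp1]": -2.0}
solution = {"eta": 5.0, "X[supp1]": 1.0}
print(cut_is_satisfied(cut, 2.0, solution))
```

Running every generated cut against a stored globally optimal solution of a small instance is a cheap way to catch a coding error in the cut coefficients.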

Best regards,

Maliheh