Gurobi seems unable to solve an SOCP problem to optimality
Answered

I have the following second-order cone problem. I used

model.write("myfile_test.lp")

to output the model:
\ LP format - for model browsing. Use MPS format to capture full model detail.
Minimize
d[0] + c[0] + c[1] + c[2] + c[3] + c[4] + c[5] + c[6] + c[7]
Subject To
R0: - x[0,0] + x[1,0] + w[0,0] = 0
R1: - x[0,1] + x[1,1] + w[0,1] = 0
R2: d[0] - de[0] >= 0
R3: x[0,0] + u[0,0] = 0
R4: x[0,1] + u[0,1] = 2
R5: c[0] - 2.23606797749979 y[0] - z[0] >= -2.23606797749979
R6: x[1,0] + u[1,0] = 0
R7: x[1,1] + u[1,1] = 2
R8: c[1] - 2.23606797749979 y[1] - z[1] >= -2.23606797749979
R9: x[0,0] + u[2,0] = 1
R10: x[0,1] + u[2,1] = 2
R11: c[2] - 2.23606797749979 y[2] - z[2] >= -2.23606797749979
R12: x[1,0] + u[3,0] = 1
R13: x[1,1] + u[3,1] = 2
R14: c[3] - 2.23606797749979 y[3] - z[3] >= -2.23606797749979
R15: x[0,0] + u[4,0] = 0
R16: x[0,1] + u[4,1] = 0
R17: c[4] - 2.23606797749979 y[4] - z[4] >= -2.23606797749979
R18: x[1,0] + u[5,0] = 0
R19: x[1,1] + u[5,1] = 0
R20: c[5] - 2.23606797749979 y[5] - z[5] >= -2.23606797749979
R21: x[0,0] + u[6,0] = 1
R22: x[0,1] + u[6,1] = 0
R23: c[6] - 2.23606797749979 y[6] - z[6] >= -2.23606797749979
R24: x[1,0] + u[7,0] = 1
R25: x[1,1] + u[7,1] = 0
R26: c[7] - 2.23606797749979 y[7] - z[7] >= -2.23606797749979
R27: y[0] + y[1] = 1
R28: y[2] + y[3] = 1
R29: y[4] + y[5] = 1
R30: y[6] + y[7] = 1
R31: y[0] + y[2] + y[4] + y[6] = 2
R32: y[1] + y[3] + y[5] + y[7] = 2
Bounds
y[0] <= 1
y[1] <= 1
y[2] <= 1
y[3] <= 1
y[4] <= 1
y[5] <= 1
y[6] <= 1
y[7] <= 1
General Constraints
GC0: de[0] = NORM ( 2 ) ( w[0,0] , w[0,1] )
GC1: z[0] = NORM ( 2 ) ( u[0,0] , u[0,1] )
GC2: z[1] = NORM ( 2 ) ( u[1,0] , u[1,1] )
GC3: z[2] = NORM ( 2 ) ( u[2,0] , u[2,1] )
GC4: z[3] = NORM ( 2 ) ( u[3,0] , u[3,1] )
GC5: z[4] = NORM ( 2 ) ( u[4,0] , u[4,1] )
GC6: z[5] = NORM ( 2 ) ( u[5,0] , u[5,1] )
GC7: z[6] = NORM ( 2 ) ( u[6,0] , u[6,1] )
GC8: z[7] = NORM ( 2 ) ( u[7,0] , u[7,1] )
End
There are some linear constraints and some second-order cone constraints at the bottom.
There are also some non-negativity constraints that Gurobi doesn't output, so in addition:
c >= 0, y >= 0, d >= 0, de >= 0, z >= 0.
Gurobi doesn't complain and solves it; the optimal solution is
x = array([[0.0, 0.0], [0.0, 0.0]])
y = array(
[0.89442719, 0.10557281, 0.65835921, 0.34164079, 0.0, 1.0, 0.4472136, 0.5527864]
)
with an optimal value of 4.
However, the correct optimal value should be 0, with
x = array([[0.5, 1.0], [0.5, 1.0]])
y = array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
In fact, I got that optimal value from cvxpy using the Gurobi solver.
Any reason why?
The output is
Optimize a model with 33 rows, 48 columns and 80 nonzeros
Model fingerprint: 0xd21fb064
Model has 9 general constraints
Variable types: 48 continuous, 0 integer (0 binary)
Coefficient statistics:
Matrix range [1e+00, 2e+00]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e+00, 2e+00]
Presolve removed 33 rows and 48 columns
Presolve time: 0.00s
Presolve: All rows and columns removed
Explored 0 nodes (0 simplex iterations) in 0.00 seconds (0.00 work units)
Thread count was 1 (of 4 available processors)
Solution count 1: 4
Optimal solution found (tolerance 1.00e-04)
Best objective 4.000000000000e+00, best bound 4.000000000000e+00, gap 0.0000%
Hi Fei Wang,
The solution
x = array([[0.5, 1. ],
[0.5, 1. ]])
y = array([0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5])

is not feasible for the LP model you provided unless the \(u\) variables can take negative values.
For instance, if \(x[0,0] = 0.5\), then the constraint \(x[0,0]+u[0,0] = 0\) implies \(u[0,0]\) must be \(-0.5\). However, since the \(u\) variables have no lower bounds specified in the LP model, they are considered non-negative by default.
Can you please verify if you are solving the same model and obtaining different optimal solutions?
Regards,
Simran
Hi Simran,
Thank you very much! That solved my problem. I didn't know the default lower bound is zero.
Thanks Simran,
It works. However, in my code self.r is only 2, so this change won't save a lot of time; in fact, the majority of the time is spent adding the SOCP constraints.
Constraints like the following take a lot of time:
model.addConstr(de[E_index_dict[(i, j)]] == gp.norm(w[E_index_dict[(i, j)], :], 2))
I have another question:
I found that adding the constraints to the model is the most expensive part of the code.
It is the model.addConstr() call that costs 90% of the time. Since I am just adding some linear constraints and SOCP constraints, why does adding them to the model cost so much time?
Hi Fei Wang,
Can you please share example code showing how you build your model?
The article "How do I improve the time to build the model?" may be helpful for you. It contains several tips to speed up model building time.
Regards,
Simran
Hi Simran,
Here is the part of the code that's very expensive. Most of the time is spent adding the constraints, especially in the first and second for loops:
model = gp.Model()
model.Params.LogToConsole = 0
x = model.addMVar((len(self.steiner_points), self.r), lb=-GRB.INFINITY, vtype=GRB.CONTINUOUS, name="x")
d = model.addVars(len(E), lb=0, vtype=GRB.CONTINUOUS, name="d")
de = model.addVars(len(E), lb=0, vtype=GRB.CONTINUOUS, name="de")
c = model.addMVar(len(self.ET), lb=0, vtype=GRB.CONTINUOUS, name="c")
y = model.addMVar(len(self.ET), lb=0, ub=1, vtype=GRB.CONTINUOUS, name="y")
z = model.addVars(len(self.ET), lb=0, vtype=GRB.CONTINUOUS, name="z")
# ze = model.addVars(len(self.ET), lb=0, vtype=GRB.CONTINUOUS, name="z")
w = model.addMVar((len(E), self.r), lb=-GRB.INFINITY,vtype=GRB.CONTINUOUS, name="w")
u = model.addMVar((len(self.ET), self.r), lb=-GRB.INFINITY,vtype=GRB.CONTINUOUS, name="u")
obj = gp.quicksum(d[i] for i in range(len(E))) + gp.quicksum(c[i] for i in range(len(self.ET)))
model.setObjective(obj, GRB.MINIMIZE)
start_time = time.time()
for (i, j) in E:
if i in self.terminal_indices_label_added_dict:
label_terminal_i = self.steiner_tree.get_label(i)
for k in range(self.r):
model.addConstr(
w[E_index_dict[(i, j)], k] == self.terminal_label_coord_dict[label_terminal_i][k]
- x[Steiner_points_index_dict[j], k])
model.addConstr(de[E_index_dict[(i, j)]] == gp.norm(w[E_index_dict[(i, j)], :], 2))
model.addConstr(d[E_index_dict[(i, j)]] >= de[E_index_dict[(i, j)]])
else:
for k in range(self.r):
model.addConstr(w[E_index_dict[(i, j)], k] == x[Steiner_points_index_dict[i], k] - x[
Steiner_points_index_dict[j], k])
model.addConstr(de[E_index_dict[(i, j)]] == gp.norm(w[E_index_dict[(i, j)], :], 2))
model.addConstr(d[E_index_dict[(i, j)]] >= de[E_index_dict[(i, j)]])
for (i, j) in self.ET:
label_terminal_i = self.terminals_index_label_not_added_dict[i]
M_i = self.M[label_terminal_i]
# model.addConstr(z[ET_index_dict[(i, j)]] == c[ET_index_dict[(i, j)]] + M_i * (1 - y[ET_index_dict[(i, j)]]))
for k in range(self.r):
model.addConstr(u[ET_index_dict[(i, j)], k] == self.terminal_label_coord_dict[label_terminal_i][k]
- x[Steiner_points_index_dict[j], k])
model.addConstr(z[ET_index_dict[(i, j)]] ==
gp.norm(u[ET_index_dict[(i, j)], :], 2))
model.addConstr(c[ET_index_dict[(i, j)]] >=
z[ET_index_dict[(i, j)]] - M_i * (1 - y[ET_index_dict[(i, j)]]))
for i in self.terminal_indices_not_added:
model.addConstr(gp.quicksum(y[ET_index_dict[(i, j)]] for j in self.steiner_points_degree_less_3) == 1)
for j in self.steiner_points_degree_less_3:
degree_j = len(self.steiner_tree.nodes[j]['neighbors'])
model.addConstr(gp.quicksum(y[ET_index_dict[(i, j)]] for i in self.terminal_indices_not_added)
== 3 - degree_j)
time_solve += time.time() - start_time
model.optimize()
Hi Fei Wang,
The majority of the time spent in adding constraints to your model is perhaps consumed in the for loops like the one below:
for k in range(self.r):
    model.addConstr(
        w[E_index_dict[(i, j)], k] == self.terminal_label_coord_dict[label_terminal_i][k]
        - x[Steiner_points_index_dict[j], k])
Instead of looping over k, you can exploit the MVar nature of the variables and replace the above with the equivalent following constraint:
model.addConstr(w[E_index_dict[(i, j)]] == terminal_label_coord_mat[label_terminal_i] - x[Steiner_points_index_dict[j]])
w[E_index_dict[(i, j)]] and x[Steiner_points_index_dict[j]] are k-dimensional arrays of variables, and this will add one constraint for each value of k.
Please note that for the above constraint to work, you would need to convert the dictionary terminal_label_coord_dict to a 2D matrix terminal_label_coord_mat appropriately.
Could you please make the equivalent changes for all for loops and test if this helps?
Regards,
Simran
Hi Fei Wang,
Indeed, my previous suggestion will help save time for larger values of self.r.
With respect to the SOCP constraints, you can try adding them using the addGenConstrNorm method. It is slightly faster than using gp.norm in my test below:
import gurobipy as gp
import time as time
n = 100000
time_solve = 0
model = gp.Model()
de = model.addMVar(n, name = "de")
w = model.addMVar((n, 2), name = "w")
# test gp.norm
start_time = time.time()
for i in range(n):
s = model.addConstr(de[i] == gp.norm(w[i,:], 2))
time_solve = time.time() - start_time
print("time_solve_1", time_solve)
# test GenConstrNorm
start_time = time.time()
for i in range(n):
m = model.addGenConstrNorm(de[i], w[i], 2)
time_solve = time.time() - start_time
print("time_solve_2", time_solve)

Regards,
Simran