Matrix multiplication in objective function. "TypeError: 'MLinExpr' object is not subscriptable"
Hello,
I'm trying to solve this mixed integer problem and running into issues in specifying my objective function. My code is below.
# set the objective function
model.setObjective(sum((dtheta[i] - d0_f_hat[i]) * (dtheta[i] - d0_f_hat[i]) for i in range(num_edges)) +
                   sum(((d1 @ dtheta)[i] - (k * period_arr)[i]) * ((d1 @ dtheta)[i] - (k * period_arr)[i])
                       for i in range(num_faces)),
                   GRB.MINIMIZE)
I'm trying to minimize the sum of two squared L-2 norms; in matrix notation, the objective is ||dtheta - d0_f_hat||^2 + ||d1 @ dtheta - k * period_arr||^2 (with * elementwise). For clarity: dtheta is an MVar with dimension num_edges by 1; d0_f_hat is a numpy array with dimensions num_edges by 1; d1 is a scipy sparse matrix with dimensions num_faces by num_edges; k is an MVar with dimensions num_faces by 1; and period_arr is a numpy array with dimensions num_faces by 1. The first line in the objective function works; I've solved many problems with it and the solutions look good. However, our problem has now changed and we would like to include the second line in the minimization. d1 @ dtheta should have dimensions num_faces by 1, but whenever I try to access its i-th element (to build the L-2 norm), I get the following error.
TypeError: 'MLinExpr' object is not subscriptable
This comes from the second line in the objective function, where I multiply d1 with dtheta and then index the result.
Has anyone in the community run into or fixed this issue?
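For reference, here is a rough sketch of how the data is set up (sizes and random placeholder data are illustrative only, not my actual problem):
import numpy as np
import scipy.sparse as sp
import gurobipy as gp
from gurobipy import GRB

# illustrative sizes only
num_edges, num_faces = 6, 4

model = gp.Model()
# dtheta: continuous MVar, num_edges by 1
dtheta = model.addMVar((num_edges, 1), lb=-GRB.INFINITY, name="dtheta")
# k: integer MVar, num_faces by 1
k = model.addMVar((num_faces, 1), lb=-GRB.INFINITY, vtype=GRB.INTEGER, name="k")
# data arrays (random placeholders)
d0_f_hat = np.random.rand(num_edges, 1)
period_arr = np.random.rand(num_faces, 1)
# d1: sparse num_faces by num_edges matrix
d1 = sp.random(num_faces, num_edges, density=0.5, format="csr")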
Thanks,
Rahul
-
Hi Rahul,
The result of
d1 @ dtheta
is an MLinExpr, which is not an array or list of any kind and thus not subscriptable. You could try
d1[i] * dtheta[i]
You say that you are trying to implement the L2 norm. You should definitely try the addGenConstrNorm method, which was introduced in v9.5.0. Please note that this method accepts only Var objects, not MVar or LinExpr objects. This should not be a problem, since one can always introduce auxiliary variables and equality constraints, e.g.,
m.addConstr(auxVar1 == x+y)
m.addConstr(auxVar2 == w+z)
# add constraint auxVar3 = L2Norm(auxVar1, auxVar2)
m.addGenConstrNorm(auxVar3, [auxVar1, auxVar2], 2.0)
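For completeness, here is a minimal self-contained sketch of this pattern (all variable names are illustrative):
import gurobipy as gp
from gurobipy import GRB

m = gp.Model()
x = m.addVar(lb=-GRB.INFINITY, name="x")
y = m.addVar(lb=-GRB.INFINITY, name="y")
w = m.addVar(lb=-GRB.INFINITY, name="w")
z = m.addVar(lb=-GRB.INFINITY, name="z")
auxVar1 = m.addVar(lb=-GRB.INFINITY, name="auxVar1")
auxVar2 = m.addVar(lb=-GRB.INFINITY, name="auxVar2")
auxVar3 = m.addVar(name="auxVar3")  # a norm is nonnegative, so lb=0 is fine

m.addConstr(auxVar1 == x + y)
m.addConstr(auxVar2 == w + z)
# auxVar3 = L2Norm(auxVar1, auxVar2)
m.addGenConstrNorm(auxVar3, [auxVar1, auxVar2], 2.0)
m.setObjective(auxVar3, GRB.MINIMIZE)
m.optimize()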
Best regards,
Jaromił
-
Hi Jaromil,
Thanks for your response. I solved the issue by using auxiliary constraints. In particular, I set
model.addConstr(diff_1 == mvar_a - mvar_b)
and then, in my objective, I minimize
diff_1 @ diff_1
as a way of minimizing the squared L-2 norm. This works and the results look good. However, I had a question for you. My problems are large, with numerous integer variables, and I noticed a massive performance drop-off when I set the norm in an auxiliary constraint or use addGenConstrNorm(). The approach I described above, using diff_1 @ diff_1, has given me the best performance. Is there any standard way of setting L-2 norms (or their squares) in the objective function that will maximize performance?
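Concretely, a stripped-down sketch of the pattern (dimensions and names are illustrative, not my actual model):
import gurobipy as gp
from gurobipy import GRB

n = 5  # illustrative size
model = gp.Model()
mvar_a = model.addMVar(n, lb=-GRB.INFINITY, name="a")
mvar_b = model.addMVar(n, lb=-GRB.INFINITY, name="b")
diff_1 = model.addMVar(n, lb=-GRB.INFINITY, name="diff_1")

# auxiliary equality constraint defining the residual
model.addConstr(diff_1 == mvar_a - mvar_b)
# diff_1 @ diff_1 is the squared L-2 norm: a convex quadratic objective
model.setObjective(diff_1 @ diff_1, GRB.MINIMIZE)
model.optimize()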
Thanks once again.
Rahul
-
Hi Rahul,
The approach I described above, using diff_1 @ diff_1, has given me the best performance. Is there any standard way of setting L-2 norms (or their squares) in the objective function that will maximize performance?
That's interesting. Could you share a minimal working example showing the performance difference during model construction when using diff_1 @ diff_1 vs. addGenConstrNorm? Usually, the @ operator should be faster, but the difference should not be too great compared to the addGenConstrNorm method.
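Even rough construction timings would help, for example (build_model_quad and build_model_norm are hypothetical placeholders for your two construction routines):
import time

# build_model_quad / build_model_norm are hypothetical stand-ins for the
# routines that construct each model variant
for builder in (build_model_quad, build_model_norm):
    t0 = time.perf_counter()
    model = builder()
    model.update()  # flush pending modifications so they count toward the timing
    print(builder.__name__, f"{time.perf_counter() - t0:.2f} s")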
Best regards,
Jaromił
-
Hi Jaromil,
Sure thing. It's hard for me to share a small example: when the model is small, both approaches perform the same, as expected; it's when the model scales up that I really notice the difference. Here is the log file using the @ operator. My objective is to minimize diff_1 @ diff_1.
Gurobi Optimizer version 9.5.0 build v9.5.0rc5 (mac64[x86])
Thread count: 2 physical cores, 4 logical processors, using up to 4 threads
Optimize a model with 11711 rows, 15137 columns and 37200 nonzeros
Model fingerprint: 0x638785ea
Model has 5815 quadratic objective terms
Model has 2 general constraints
Variable types: 12827 continuous, 2310 integer (0 binary)
Coefficient statistics:
Matrix range [1e-01, 1e+00]
Objective range [0e+00, 0e+00]
QObjective range [2e+00, 2e+00]
Bounds range [0e+00, 0e+00]
RHS range [4e-07, 3e+00]
Presolve removed 9400 rows and 9402 columns
Presolve time: 0.04s
Presolved: 2311 rows, 5735 columns, 11470 nonzeros
Presolved model has 3425 quadratic objective terms
Variable types: 3425 continuous, 2310 integer (0 binary)
Found heuristic solution: objective 0.0029997
Root relaxation: objective 8.123175e-06, 4621 iterations, 0.02 seconds (0.01 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 0.00001 0 2310 0.00300 0.00001 100% - 0s
0 0 0.00206 0 534 0.00300 0.00206 31.3% - 0s
0 0 0.00206 0 534 0.00300 0.00206 31.3% - 0s
Interrupt request received
0 2 0.00206 0 536 0.00300 0.00206 31.3% - 0s
926 80 0.00227 498 272 0.00300 0.00223 25.8% 6.3 5s
1679 183 0.00270 960 81 0.00300 0.00223 25.8% 6.8 10s
Explored 2024 nodes (20151 simplex iterations) in 12.73 seconds (13.19 work units)
Thread count was 4 (of 4 available processors)
Solution count 1: 0.00299972
Optimal solution found (tolerance 1.00e-04)
Best objective 2.999717312872e-03, best bound 2.999717312872e-03, gap 0.0000%
In this instance, diff_1 and diff_2 are declared as I described above. Here is the same model but using the addGenConstrNorm method.
Gurobi Optimizer version 9.5.0 build v9.5.0rc5 (mac64[x86])
Thread count: 2 physical cores, 4 logical processors, using up to 4 threads
Optimize a model with 11711 rows, 15139 columns and 37200 nonzeros
Model fingerprint: 0x07d75ba8
Model has 2 general constraints
Variable types: 12829 continuous, 2310 integer (0 binary)
Coefficient statistics:
Matrix range [1e-01, 1e+00]
Objective range [1e+00, 1e+00]
Bounds range [0e+00, 0e+00]
RHS range [4e-07, 3e+00]
Interrupt request received
Presolve removed 9400 rows and 9403 columns
Presolve time: 0.05s
Presolved: 5739 rows, 12589 columns, 21750 nonzeros
Presolved model has 3426 quadratic constraint(s)
Variable types: 10279 continuous, 2310 integer (0 binary)
Root relaxation: objective 2.850118e-03, 1 iterations, 0.01 seconds (0.00 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 0.00285 0 2310 - 0.00285 - - 0s
0 0 0.00285 0 2310 - 0.00285 - - 0s
0 0 0.00285 0 2310 - 0.00285 - - 0s
0 0 0.00285 0 2310 - 0.00285 - - 0s
H 0 0 0.0547698 0.00285 94.8% - 0s
0 1 0.00285 0 2310 0.05477 0.00285 94.8% - 0s
1537 1503 0.01808 294 603 0.05477 0.00285 94.8% 3.8 7s
1540 1505 0.01061 651 921 0.05477 0.00562 89.7% 3.8 10s
1551 1513 0.01861 236 2222 0.05477 0.00951 82.6% 3.8 15s
1562 1520 0.01145 8 2231 0.05477 0.01145 79.1% 3.7 20s
1571 1526 0.01209 49 2224 0.05477 0.01209 77.9% 3.7 25s
1590 1540 0.01246 32 2310 0.05477 0.01246 77.2% 57.7 30s
1593 1542 0.01808 575 1182 0.05477 0.01246 77.2% 57.6 36s
1600 1547 0.01246 276 2216 0.05477 0.01246 77.2% 57.4 40s
1611 1554 0.01246 275 2216 0.05477 0.01246 77.2% 57.0 45s
1643 1567 0.02548 28 2208 0.05477 0.01246 77.2% 80.0 50s
1982 1711 0.02548 54 2182 0.05477 0.01246 77.2% 85.0 55s
2033 1709 0.02656 60 2192 0.05477 0.01246 77.2% 96.6 60s
2236 1794 0.04729 75 1856 0.05477 0.01246 77.2% 107 65s
2308 1803 0.02659 82 2170 0.05477 0.01246 77.2% 110 70s
2579 1942 0.02676 105 2147 0.05477 0.01246 77.2% 117 75s
3142 2317 0.02696 173 2078 0.05477 0.01246 77.2% 109 80s
3713 2538 0.04736 231 2006 0.05477 0.01246 77.2% 99.0 85s
4298 2977 0.02707 299 1951 0.05477 0.01246 77.2% 93.3 90s
4803 3203 0.02707 345 1904 0.05477 0.01246 77.2% 88.7 95s
5288 3553 0.03557 394 1856 0.05477 0.01246 77.2% 84.9 100s
5630 3694 0.02707 435 1814 0.05477 0.01246 77.2% 84.3 107s
Cutting planes:
Gomory: 45
Implied bound: 1
MIR: 1165
Explored 5901 nodes (500719 simplex iterations) in 109.58 seconds (103.75 work units)
Thread count was 4 (of 4 available processors)
Solution count 1: 0.0547698
I killed it because it was making slow progress. Here, my objective is to minimize diff_1_norm, where diff_1_norm is defined as
model.addGenConstrNorm(diff_1_norm, diff_1, 2.0)
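The objective is then simply
model.setObjective(diff_1_norm, GRB.MINIMIZE)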
The difference in performance is definitely interesting, but there are a couple of things to note. In the former example, I'm minimizing an L-2 norm. In the latter example, I'm minimizing an L-1 norm. This is mostly because I couldn't find a better way to specify the L-2 norm of diff_1 without using auxiliary quadratic equality constraints, thereby making my problem non-convex and even slower in my experiments.
In addition to your thoughts on how best to maximize performance, I would also like to know if you have any suggestions for specifying the squared L-2 norm, or whether the @ operator is the best way to go?
Best,
Rahul
-
Hi Rahul,
Just to clarify: the performance difference is not during the construction of the model but during the optimization process, correct?
In the former example, I'm minimizing an L-2 norm. In the latter example, I'm minimizing an L-1 norm. This is mostly because I couldn't find a better way to specify the L-2 norm of diff_1 without using auxiliary quadratic equality constraints, thereby making my problem non-convex and even slower in my experiments.
The L2 norm can be the better choice because it is convex. Additionally, you can see that the presolved model of the L2 norm run is much smaller than that of the L1 norm run. On top of that comes the nonconvexity you mentioned. Thus, the performance difference makes sense.
In addition to your thoughts on how best to maximize performance, I would also like to know if you have any suggestions for specifying the squared L-2 norm, or whether the @ operator is the best way to go?
If you generate the exact same model in both cases, there should be no difference in performance. If, however, you do observe big performance differences, could you please write both models to files and share them?
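For example, assuming the two models are held in placeholder handles m_quad and m_norm:
# Model.write infers the file format from the extension; .lp files are human-readable
m_quad.write("quad_objective.lp")
m_norm.write("genconstr_norm.lp")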
Best regards,
Jaromił
-
Hi Jaromil,
I think your comments regarding the non-convexity and presolve make sense. They are not the exact same model, as one of them is an L-1 norm minimization and the other an L-2 norm minimization, so the difference in performance makes sense. Thanks for your correspondence!
Best,
Rahul