VarHint where MIP Start is not applicable (with pyomo)
Dear Community,
I have a rather small (1105 continuous and 728 binary variables, and similarly many constraints) but complex MILP problem: it is a bilevel optimization problem with the complementarity constraints reformulated as Big-M constraints and with three bilinear terms in the objective function linearized by a binary expansion.
The entire algorithm is an iterative process: an LP is solved first, some of its variable values are passed to the first BLP as parameters, some variable values of that solution are in turn passed to the second BLP as parameters, and so on. Since all problems share the same variables, whose values are similar (or at least in the same ballpark), I wanted to use a MIP start. However, due to the binary expansion, the LP solution is no longer an exact solution, and the MIP start does not produce an incumbent. If I fix the passed variables, the problem is infeasible. Thus, I would like to use variable hints (VarHint). My question is whether and how I can use VarHint with Pyomo. In Pyomo, the MIP start feature is easily accessed by setting the values of the variables. Is there a similar feature for VarHint?
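For reference, the MIP start is currently passed roughly as follows (a minimal sketch; m and lp_solution are placeholder names, and the exact solver interface may differ):

import pyomo.environ as pyo

# m: the Pyomo MILP model; lp_solution: {variable name: value} taken from the
# preceding LP solve (both names are placeholders for illustration).
for var in m.component_data_objects(pyo.Var, active=True):
    if var.name in lp_solution:
        var.value = lp_solution[var.name]  # the loaded values become the MIP start

opt = pyo.SolverFactory('gurobi', solver_io='python')
results = opt.solve(m, warmstart=True, tee=True)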
Additional questions:
1.) Can I check why my B&B algorithm is slow? I have tried the measures proposed here in the forum, but nothing has really helped. It takes roughly 500 iterations per node to solve the above mentioned problem, even with aggressive Cuts and Presolve. Additionally, the best bound is almost stuck. Any ideas?
2.) Is it good practice to use explicit Big-M Formulations (with a rather good estimation of the Big-M) or are SOS or different formulations better in Gurobi?
Many thanks and cheers,
Jacob
EDIT: I've now managed to set the VarHintVal attribute, but I can't say that it solves the problem more quickly. It seems to be equally slow. Is there a way to check a) whether the value was actually set and b) whether it helped?
Hi Jacob,
1.) Can I check why my B&B algorithm is slow? I have tried the measures proposed here in the forum, but nothing has really helped. It takes roughly 500 iterations per node to solve the above mentioned problem, even with aggressive Cuts and Presolve. Additionally, the best bound is almost stuck. Any ideas?
Could you post the log file? Did you have a look at the list of the most important parameters? You could also try Gurobi's command-line tuning tool if the problem solves within a reasonable amount of time.
2.) Is it good practice to use explicit Big-M Formulations (with a rather good estimation of the Big-M) or are SOS or different formulations better in Gurobi?
Yes, if you have a good Big-M estimation then Big-M formulations tend to work very well.
EDIT: I've now managed to set the VarHintVal attribute, but I can't say that it solves the problem more quickly. It seems to be equally slow. Is there a way to check a) whether the value was actually set and b) whether it helped?
In addition to VarHintVal, you could also set the VarHintPri attribute to provide your confidence level for the hint. Usually, one would see Gurobi find a feasible point faster or at least change the optimization path to lean more towards the user's hints. If the hints are rather poor then the runtime may increase instead of decreasing.
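From Pyomo, both attributes can be set through the persistent Gurobi interface, roughly as in the following sketch (assuming the gurobi_persistent plugin, which requires gurobipy; m and hint_values are placeholder names):

import pyomo.environ as pyo

opt = pyo.SolverFactory('gurobi_persistent')
opt.set_instance(m)  # builds the Gurobi model from the Pyomo model m

# hint_values: {Pyomo variable: hinted value} from the previous solve (placeholder)
for var, val in hint_values.items():
    opt.set_var_attr(var, 'VarHintVal', val)
    opt.set_var_attr(var, 'VarHintPri', 5)  # larger value = higher confidence in this hint

results = opt.solve(tee=True)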
Best regards,
Jaromił
Hi Jaromił,
thank you for your kind answer and sorry for my late response! Please find my (preliminary) log and my selected solver options below - the algorithm is still running.
As I have described above, my problem is not too large (1853 continuous and 1154 binary variables after presolve [how come presolve added non-binary integer variables?]). The binary variables are predominantly used in Big-M linearizations of complementarity constraints, and only a few stem from the binary expansion of a bilinear term in the objective function. The matrix and RHS ranges are unfortunately quite large, because my problem covers a wide range of physical values. I have designed a constraint (row) and variable (column) scaling algorithm myself, which already reduces the Frobenius norm from 7.8e+5 to 2.5e+3 before the problem is passed to the solver. As you can see, my initial warm-start guess is already quite good, with an initial optimality gap of only 2.72%. Unfortunately, the gap is only reduced to 1.97% in 2300s (not depicted below). In particular, the incumbent solution is almost stuck. Judging from experience, the incumbent might well be the optimal solution.
I have read the tuning manuals thoroughly and tried a multitude of solver options, some with better results (Heuristics) and some with worse (NodeMethod = 2, because barrier has the best worst-case complexity, correct?). I have not yet tried the tuning tool, but will do so if you say it can significantly improve the solution speed and quality.
If the warmstart is used and an incumbent value is found, the VarHint procedure is probably neglected, right?
Can you give me an indication as to where to look (within the realm of Gurobi or my own algorithm) to improve this slow optimization? Thank you so much for your help, Jaromił!
Cheers Jacob
P.S.: One additional piece of information: I'm currently running the problem from a debugger in an IDE, so command-line runs should be faster. Nevertheless, the current run for this model is incredibly slow.
Optimize a model with 5728 rows, 10662 columns and 15181 nonzeros
Variable types: 6354 continuous, 4308 integer (4308 binary)
Coefficient statistics:
Matrix range [7e-06, 3e+02]
Objective range [3e+01, 6e+05]
Bounds range [1e+00, 1e+00]
RHS range [5e-06, 5e+06]
User MIP start produced solution with objective 2.64582e+06 (0.05s)
Loaded user MIP start with objective 2.64582e+06
Presolve removed 2675 rows and 7652 columns
Presolve time: 0.06s
Presolved: 3053 rows, 3010 columns, 9035 nonzeros
Variable types: 1853 continuous, 1157 integer (1154 binary)
Root relaxation: objective 2.562045e+06, 1610 iterations, 0.07 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 2573833.14 0 52 2645820.22 2573833.14 2.72% - 0s
0 0 2581942.56 0 328 2645820.22 2581942.56 2.41% - 0s
0 0 2581942.56 0 328 2645820.22 2581942.56 2.41% - 0s
0 0 2582296.76 0 280 2645820.22 2582296.76 2.40% - 1s
0 0 2582673.81 0 287 2645820.22 2582673.81 2.39% - 1s
0 0 2582673.81 0 281 2645820.22 2582673.81 2.39% - 1s
0 0 2582673.81 0 171 2645820.22 2582673.81 2.39% - 1s
0 2 2582673.81 0 105 2645820.22 2582673.81 2.39% - 2s
174 175 2592923.50 20 238 2645820.22 2582679.03 2.39% 601 5s
H 266 239 2645820.2166 2582679.03 2.39% 500 6s
H 300 273 2645820.2130 2582679.03 2.39% 502 7s
360 344 2594580.54 35 343 2645820.21 2582679.03 2.39% 470 10s
501 470 2603188.40 53 148 2645820.21 2582679.03 2.39% 380 15s
768 601 2582679.03 9 396 2645820.21 2582679.03 2.39% 293 20s
1051 815 2592923.50 26 265 2645820.21 2582679.03 2.39% 280 25s
1504 1036 2586484.68 20 171 2645820.21 2582679.03 2.39% 257 30s
1515 1048 2582679.03 13 339 2645820.21 2582679.03 2.39% 265 35s
1841 1223 2591306.28 32 339 2645820.21 2582679.03 2.39% 261 40s
2252 1313 2600946.43 54 386 2645820.21 2582679.03 2.39% 255 46s
2551 1433 infeasible 80 2645820.21 2582679.03 2.39% 249 51s
H 2660 1351 2645820.1771 2582679.03 2.39% 247 52s
H 2661 1301 2645820.0226 2582679.03 2.39% 247 52s
2794 1367 2593960.31 21 464 2645820.02 2582679.03 2.39% 247 56s
3136 1450 2594509.96 23 487 2645820.02 2582679.03 2.39% 239 61s
3366 1558 2594662.61 23 428 2645820.02 2582679.03 2.39% 248 66s
3619 1579 2633159.21 22 466 2645820.02 2582679.03 2.39% 252 71s
H 3623 1531 2645820.0144 2582679.03 2.39% 252 71s
4005 1760 2587858.62 20 454 2645820.01 2582679.03 2.39% 242 76s
4400 1899 infeasible 23 2645820.01 2582694.21 2.39% 240 80s
H 4431 1899 2645819.9985 2582695.61 2.39% 240 80s
4768 2212 2603082.01 28 239 2645820.00 2582712.52 2.39% 250 85s
H 4878 2212 2645819.9604 2582712.52 2.39% 246 85s
5415 2575 infeasible 19 2645819.96 2582742.04 2.38% 247 91s
5766 2834 2585149.39 26 709 2645819.96 2582760.15 2.38% 250 95s
6072 2948 infeasible 26 2645819.96 2582794.44 2.38% 254 100s
6696 3324 2592917.88 25 290 2645819.96 2582828.05 2.38% 255 106s
7126 3711 2603082.01 40 88 2645819.96 2582996.72 2.37% 258 112s
7647 3850 2603108.07 29 267 2645819.96 2583116.49 2.37% 256 116s
8092 4019 2594362.65 28 480 2645819.96 2583207.60 2.37% 262 122s
8564 4217 2603108.07 43 134 2645819.96 2583374.40 2.36% 265 127s
H 8589 4217 2645819.9410 2583374.40 2.36% 265 127s
8726 4372 2593951.42 32 517 2645819.94 2583378.18 2.36% 267 130s
9367 4625 2603085.76 46 237 2645819.94 2583648.60 2.35% 273 136s
9928 5010 2603082.01 32 285 2645819.94 2583855.37 2.34% 278 142s
10391 5141 infeasible 27 2645819.94 2583880.07 2.34% 278 145s
11034 5323 2594144.90 22 562 2645819.94 2584242.47 2.33% 283 152s
11335 5397 infeasible 36 2645819.94 2584254.99 2.33% 286 155s
11994 5638 infeasible 30 2645819.94 2584714.21 2.31% 288 162s
12286 5748 2594171.84 32 525 2645819.94 2584796.84 2.31% 289 166s
12618 5873 2590406.62 19 475 2645819.94 2584912.50 2.30% 292 170s
13048 5984 2587930.60 32 577 2645819.94 2585199.22 2.29% 294 175s
13947 6220 2603065.89 43 289 2645819.94 2585548.97 2.28% 298 184s
14309 6279 infeasible 36 2645819.94 2585779.85 2.27% 300 189s
14708 6475 2635316.65 52 414 2645819.94 2586214.37 2.25% 301 194s
15267 6613 2593337.38 29 718 2645819.94 2586703.10 2.23% 302 199s
15729 7015 2598451.77 29 270 2645819.94 2587140.07 2.22% 303 205s
16748 7145 2603162.35 66 178 2645819.94 2587258.42 2.21% 299 210s
17268 7262 infeasible 35 2645819.94 2587832.31 2.19% 302 215s
17705 7391 2592057.79 27 636 2645819.94 2588133.20 2.18% 305 220s
18173 7633 infeasible 33 2645819.94 2588733.64 2.16% 308 226s
18893 8165 2598381.93 33 478 2645819.94 2589055.18 2.15% 309 233s
20224 8404 2592216.65 31 705 2645819.94 2589511.51 2.13% 305 239s
20892 8499 infeasible 46 2645819.94 2589810.12 2.12% 307 247s
21541 8678 2602842.26 40 377 2645819.94 2589987.17 2.11% 310 254s
22284 8934 2598198.05 43 544 2645819.94 2590753.91 2.08% 310 262s
H22565 8934 2645819.8935 2590805.03 2.08% 309 262s
23104 8974 infeasible 32 2645819.89 2591092.07 2.07% 309 270s
23534 9092 2596394.40 45 631 2645819.89 2591207.28 2.06% 311 277s
24226 9216 infeasible 30 2645819.89 2591516.99 2.05% 313 285s
25067 9696 2593571.49 41 549 2645819.89 2591900.68 2.04% 313 293s
26751 9747 infeasible 34 2645819.89 2592100.71 2.03% 308 300s
27545 9880 2601421.81 38 412 2645819.89 2592349.87 2.02% 309 308s
28377 10313 2593004.81 34 567 2645819.89 2592479.61 2.02% 310 316s
29763 10625 infeasible 46 2645819.89 2592571.73 2.01% 306 326s
30972 10851 2596557.50 42 501 2645819.89 2592656.72 2.01% 305 336s
32238 11420 2603072.30 53 225 2645819.89 2592750.51 2.01% 305 347s
34542 11501 infeasible 24 2645819.89 2592851.63 2.00% 300 355s
35370 11770 infeasible 31 2645819.89 2592864.74 2.00% 300 362s
36339 12048 2594172.75 30 553 2645819.89 2592917.88 2.00% 302 371s
37180 12360 2603082.01 35 241 2645819.89 2592917.88 2.00% 304 379s
38042 12646 2593001.50 38 504 2645819.89 2592917.88 2.00% 307 388s
39043 13195 2599312.88 36 423 2645819.89 2592917.88 2.00% 310 398s
40363 13719 2596012.04 34 489 2645819.89 2592917.88 2.00% 309 407s
41540 14904 2645294.69 56 183 2645819.89 2592917.88 2.00% 309 416s
43725 15385 infeasible 40 2645819.89 2592917.88 2.00% 304 424s
44696 16057 2593213.02 35 427 2645819.89 2592917.88 2.00% 303 434s
46036 17246 2592933.98 36 541 2645819.89 2592917.88 2.00% 302 443s
48307 17525 infeasible 56 2645819.89 2592917.88 2.00% 297 449s
48911 17827 2592917.88 34 325 2645819.89 2592917.88 2.00% 297 457s
49787 18251 2603082.01 33 239 2645819.89 2592917.88 2.00% 297 464s
50594 18754 2603082.01 50 163 2645819.89 2592917.88 2.00% 297 473s
51776 19140 2603082.01 34 263 2645819.89 2592917.88 2.00% 297 482s
52864 19902 2595948.58 39 469 2645819.89 2592917.88 2.00% 296 489s
54394 20304 2592917.88 41 418 2645819.89 2592917.88 2.00% 295 497s
55212 20703 2603082.01 45 263 2645819.89 2592917.88 2.00% 294 505s
56246 21382 2602383.23 45 329 2645819.89 2592917.88 2.00% 294 514s
57978 21666 infeasible 42 2645819.89 2592917.88 2.00% 292 523s
58902 22023 2592977.39 27 414 2645819.89 2592917.88 2.00% 293 531s
59938 23185 2599113.81 48 394 2645819.89 2592917.88 2.00% 292 542s
62432 23403 infeasible 45 2645819.89 2592917.88 2.00% 289 547s
63033 23674 2603082.01 37 214 2645819.89 2592917.88 2.00% 289 553s
63724 23973 2592917.88 37 395 2645819.89 2592917.88 2.00% 289 560s
64487 24249 2593069.69 42 456 2645819.89 2592917.88 2.00% 289 566s
65229 24575 2603162.35 41 173 2645819.89 2592917.88 2.00% 289 574s
66171 25019 2644994.17 63 152 2645819.89 2592917.88 2.00% 289 581s
67068 25713 2645758.13 74 177 2645819.89 2592917.88 2.00% 289 589s
68829 25890 infeasible 49 2645819.89 2592917.88 2.00% 287 597s
69313 26176 2592917.88 42 409 2645819.89 2592917.88 2.00% 287 605s
70209 26493 2595364.50 41 383 2645819.89 2592917.88 2.00% 287 612s
71196 26950 2593546.37 46 455 2645819.89 2592917.88 2.00% 287 621s
72299 27702 2593732.49 39 438 2645819.89 2592917.88 2.00% 287 630s
73909 28184 infeasible 47 2645819.89 2592917.88 2.00% 286 640s
74942 28557 2645809.44 79 213 2645819.89 2592917.88 2.00% 286 650s
75946 29001 2617195.83 43 209 2645819.89 2592917.88 2.00% 286 662s
77243 30177 infeasible 42 2645819.89 2592917.88 2.00% 287 674s
79852 30464 infeasible 32 2645819.89 2592917.88 2.00% 285 681s
80666 30890 infeasible 38 2645819.89 2592917.88 2.00% 285 690s
81617 31263 2645797.82 60 184 2645819.89 2592917.88 2.00% 285 698s
82442 31664 2627878.35 38 534 2645819.89 2592917.88 2.00% 285 707s
83442 31945 infeasible 35 2645819.89 2592917.88 2.00% 285 716s
84402 32458 2603082.01 44 273 2645819.89 2592917.88 2.00% 286 725s
Solver Options:
"ObjScale": -0.5,
"LogToConsole": 1,
"NumericFocus": 2,
"ScaleFlag": 2,
"NodeMethod": 1,
"InfUnbdInfo": 1,
"DualReductions": 1,
"Heuristics": 0.2,
"Threads": 80 -
Hi Jacob,
As I have described above, my problem is not too large
Please note that the complexity of a model cannot be measured by its size alone. There are small problems in MIPLIB 2017 that are still unsolved or cannot be solved quickly.
how come presolve added non-binary integer variables?
What do you mean exactly? The number of variables after presolve is lower
Variable types: 6354 continuous, 4308 integer (4308 binary)
[...]
Variable types: 1853 continuous, 1157 integer (1154 binary)
NodeMethod = 2, because barrier has the best worst-case complexity, correct?
Yes, however it does not profit from warm-start in the B&B tree.
If the warmstart is used and an incumbent value is found, the VarHint procedure is probably neglected, right?
No, the VarHint values are still used and may significantly change the optimization path. If finding a feasible solution is not a problem, you might as well omit using VarHint attributes.
Can you give me an indication as to where to look (within the realm of Gurobi or my own algorithm) to improve this slow optimization?
Given that the lower bound "gets stuck" at \(2592917.88\) for quite some time, it might be worth trying to find a better feasible point. You could try the parameter NoRelHeurTime=300 to let the no-relaxation heuristic run for 300 seconds. It will try to improve the feasible point provided by your MIP start before the root node relaxation is solved. For now, you should also try to avoid setting the parameters ObjScale, NumericFocus, and ScaleFlag, as they may negatively affect performance and should only be used when you suspect that your problem suffers from numerical issues. While large RHS values are not ideal, they are not as bad as large matrix and objective coefficients or variable bounds. You could also experiment with the MIPFocus parameter.
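A sketch of how these suggestions could be tried from Pyomo (parameter names as in the Gurobi documentation; whether they help is problem-dependent):

import pyomo.environ as pyo

opt = pyo.SolverFactory('gurobi_persistent')
opt.options['NoRelHeurTime'] = 300  # run the no-relaxation heuristic for 300 seconds
opt.options['MIPFocus'] = 3         # for example, shift effort towards improving the best bound
# leave ObjScale, NumericFocus and ScaleFlag at their default values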
Best regards,
Jaromił
Dear Jaromił,
thank you once again for your quick and extensive answer - brilliant community support here!
What do you mean exactly? The number of variables after presolve is lower
I was wondering why my problem has 1157 integer variables but only 1154 binary variables after presolve, even though all my integer variables in the original model were binary. Thus, presolve added three non-binary integer variables. Though, this was more an observation than an actual question.
No, the VarHint values are still used and may significantly change the optimization path. If finding a feasible solution is not a problem, you might as well omit using VarHint attributes.
That is good to know, thank you!
For now, you should also try to avoid setting the parameters ObjScale, NumericFocus, and ScaleFlag, as they may negatively affect performance and should only be used when you suspect that your problem suffers from numerical issues.
That is an important point that I didn't know. I thought these parameters would generally improve the solution and thus performance.
Thanks again, Jaromił!
Best wishes
Jacob
Hi Jacob,
I was wondering why my problem has 1157 integer variables but only 1154 binary variables after presolve, even though all my integer variables in the original model were binary. Thus, presolve added three non-binary integer variables. Though, this was more an observation than an actual question.
I didn't notice this, thank you for pointing it out. Gurobi detects specific structures and reformulates the problem in the way it considers most beneficial for the solution process. Aggregation of variables and constraints may introduce integer variables, for example by condensing multiple binaries into one integer variable or by exploiting specific structure that requires the introduction of integer variables.
If you were to add general constraints or quadratic terms to your problem, you would see Gurobi add variables and constraints to reformulate some of those general constraints and/or quadratic terms. This means that, in extreme cases, the presolved model can have more variables and constraints than the original one.
thank you once again for your quick and extensive answer - brilliant community support here!
Thank you for your kind words! Any feedback is always highly appreciated!
Best regards,
Jaromił
Hi Jaromił,
interesting fact about the Presolve algorithm!
As you recommended, I have run the tuning tool, which produced few new hints (only setting DegenMoves, a parameter I had not previously heard of). Currently my solution is stuck at a 1.62% gap, with exactly the same best bound and incumbent solution for 1400 out of 3000 seconds and 460,000 out of 540,000 explored nodes. Interestingly, my provided initial solution is still (with very little improvement) the current incumbent. Is there a feature (maybe changing BranchDir?) to search preferentially in the vicinity of the incumbent solution and close the gap more quickly?
Maybe on a more abstract note, I have the feeling that my multitude of binary variables in the Big-M constraints slows my problem down incredibly. Do you have any experience with other approaches to complementarity constraints in Gurobi (SOS, regularization, penalty, ...)? If this leads too far afield, I absolutely understand if it exceeds the scope of this forum!
Thank you again for your brilliant help,
Jacob
Hi Jacob,
If you have the feeling that the dual bound is the problem in this case, you can try setting
RINS=0
Heuristics=0.001
MIPFocus=3
BranchDir=1
VarBranch=3
This will focus heavily on the improvement of the dual bound while almost completely disabling feasible point heuristics. You could further experiment with the different settings for BranchDir and VarBranch.
Maybe on a more abstract note, I have the feeling that my multitude of binary variables in the Big-M constraints slows my problem down incredibly. Do you have any experience with other approaches to complementarity constraints in Gurobi (SOS, regularization, penalty, ...)?
Big-M formulations work well as long as the Big-M value is chosen carefully, i.e., the values are rather tight. In general, one can say the same for the other approaches you listed. They all may work well in one case or another. If the parameter tuning approach is not successful, you should consider improving the coefficient ranges and also the rhs range. A different approach would be to increase the feasibility tolerances and see whether this helps.
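For illustration, a standard Big-M reformulation of a single complementarity condition \(0 \le x \perp y \ge 0\) (which may differ from the exact reformulation in your model) introduces a binary \(z\) and valid upper bounds \(M_x \ge x\), \(M_y \ge y\):

\[ x \le M_x \, z, \qquad y \le M_y \, (1-z), \qquad z \in \{0,1\}. \]

The tighter \(M_x\) and \(M_y\) are, the stronger the LP relaxation. The SOS1 alternative instead declares \(\{x, y\}\) as an SOS1 set, which avoids choosing Big-M values altogether but is enforced only through branching, so neither option dominates in general.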
Best regards,
Jaromił
Dear Jaromił,
thank you once again for your help! I will try this approach!
Best wishes
Jacob