Root relaxation single thread
Hello!
I am solving what I believe to be a hard problem. The main focus is to find all the possible solutions (nodes) that, when turned off, make it impossible to transmit a message. I am interested in different node lengths, but as the length increases the problem explodes combinatorially. I was thinking of adding more computing power, but I noticed that as time passes in the relaxation, Gurobi starts consuming fewer resources and even appears to become single-threaded.
Here is my configuration:
Gurobi 10.0.1 (win64, Python) logging started Thu May 11 08:16:48 2023
Set parameter LogToConsole to value 0
Set parameter Cutoff to value 7
Set parameter TimeLimit to value 100000
Set parameter MIPFocus to value 3
Set parameter LogFile to value "logs/gurobi_model_7.log2023_05_11-08_16_48.txt"
Set parameter Presolve to value 1
Set parameter PreSOS1Encoding to value 2
Set parameter PoolSolutions to value 10000
Set parameter PoolSearchMode to value 1
Set parameter PoolGap to value 0.01
Gurobi Optimizer version 10.0.1 build v10.0.1rc0 (win64)
CPU model: Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz, instruction set [SSE2|AVX|AVX2]
Thread count: 8 physical cores, 16 logical processors, using up to 16 threads
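(For reference, the same settings would be applied from gurobipy roughly as in the sketch below; reading the model from a file is only a placeholder for however the model is actually built.)

import gurobipy as gp

model = gp.read("model.lp")  # placeholder; the real model is built elsewhere
model.Params.LogToConsole = 0
model.Params.Cutoff = 7
model.Params.TimeLimit = 100000
model.Params.MIPFocus = 3
model.Params.LogFile = "logs/gurobi_model_7.log2023_05_11-08_16_48.txt"
model.Params.Presolve = 1
model.Params.PreSOS1Encoding = 2
model.Params.PoolSolutions = 10000
model.Params.PoolSearchMode = 1
model.Params.PoolGap = 0.01
model.optimize()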
Am I wrong to assume that more computing power won’t solve the problem faster?
Can I use another strategy, or is reformulation the only way?
Thanks in advance!
Carlos
-
Hi Carlos,
It is possible that Gurobi may not be able to utilize all cores at some stages of the optimization, e.g., if it is waiting for a thread running a heuristic to find a feasible solution in the root node before syncing. However, this usually does not have a significant impact on overall performance.
Could you share the model statistics? If the log of the solution process is not too long, could you please share it? If it is too long, you can cut out part of the branch-and-bound section.
Am I wrong to assume that more computing power won’t solve the problem faster?
It MAY help but there is certainly no guarantee.
Can I use another strategy, or is reformulation the only way?
The parameters you set are very specific. Did you find those by experimenting with smaller models?
Usually, a reformulation helps the most. However, it may be very hard to find a good reformulation, if one exists at all.
Best regards,
Jaromił
-
Hi Jaromil,
CPU model: Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz, instruction set [SSE2|AVX|AVX2]
Thread count: 8 physical cores, 16 logical processors, using up to 16 threads
Optimize a model with 22681 rows, 12088 columns and 114905 nonzeros
Model fingerprint: 0x5db751cb
Model has 5258 general constraints
Variable types: 9459 continuous, 2629 integer (2629 binary)
Coefficient statistics:
Matrix range [1e-04, 2e+04]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 1e+00]
RHS range [1e-03, 4e+00]
GenCon rhs range [1e+00, 1e+00]
GenCon coe range [1e+00, 1e+00]
MIP start from previous solve did not produce a new incumbent solution
MIP start from previous solve violates constraint e346ca64-23f9-4127-bc37-d8302b3f2787_MultipleSol by 1.000000000
Presolve removed 8066 rows and 5659 columns
Presolve time: 0.60s
Presolved: 14615 rows, 6429 columns, 68022 nonzeros
Presolved model has 2199 SOS constraint(s)
Variable types: 4230 continuous, 2199 integer (2199 binary)
Root relaxation: objective 0.000000e+00, 1860 iterations, 0.05 seconds (0.07 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 0.00000 0 3 - 0.00000 - - 0s
0 0 0.00000 0 3 - 0.00000 - - 1s
0 0 0.00000 0 3 - 0.00000 - - 1s
0 0 0.00000 0 3 - 0.00000 - - 1s
0 2 0.00000 0 3 - 0.00000 - - 1s
119 137 0.00000 14 131 - 0.00000 - 83.0 5s
724 767 0.00000 51 112 - 0.00000 - 66.5 10s
1466 1454 2.00000 61 0 - 0.00000 - 61.4 15s
1657 1603 0.00000 34 23 - 0.00000 - 70.1 20s
2086 1928 0.00000 60 124 - 0.00000 - 74.9 25s
H 2644 2144 5.0000000 0.00000 100% 74.7 28s
2744 2174 0.00000 95 59 5.00000 0.00000 100% 78.4 30s
3024 2328 1.00000 112 46 5.00000 0.00000 100% 97.9 35s
3369 2602 2.00000 135 22 5.00000 0.00000 100% 106 40s
4364 3239 1.00000 85 109 5.00000 1.00000 80.0% 106 46s
4980 3702 1.00000 60 100 5.00000 1.00000 80.0% 111 50s
5476 3890 1.00000 75 120 5.00000 1.00000 80.0% 117 55s
6603 5042 1.00000 90 115 5.00000 1.00000 80.0% 119 62s
7468 5636 2.00000 108 17 5.00000 1.00000 80.0% 119 66s
8175 6418 1.00000 67 116 5.00000 1.00000 80.0% 120 71s
8754 7238 1.00000 146 38 5.00000 1.00000 80.0% 124 75s
9989 8345 2.00000 53 53 5.00000 1.00000 80.0% 121 80s
11293 9704 4.00000 108 17 5.00000 1.00000 80.0% 119 85s
1945012 1676420 4.00000 142 5 5.00000 3.00000 40.0% 108 95235s
1946308 1677555 3.00000 171 23 5.00000 3.00000 40.0% 108 95383s
1947705 1678779 3.00000 216 27 5.00000 3.00000 40.0% 108 95555s
1949175 1679956 3.00000 174 16 5.00000 3.00000 40.0% 108 95743s
1950690 1680877 3.00000 153 14 5.00000 3.00000 40.0% 108 95854s
1950806 1680877 4.00000 153 12 5.00000 3.00000 40.0% 108 95855s
1951841 1682083 3.00000 31 20 5.00000 3.00000 40.0% 108 96009s
1952004 1682083 3.00000 208 20 5.00000 3.00000 40.0% 108 96010s
1953265 1683511 3.00000 158 21 5.00000 3.00000 40.0% 108 96245s
1955041 1684534 4.00000 181 18 5.00000 3.00000 40.0% 108 96393s
1956286 1685826 3.00000 156 45 5.00000 3.00000 40.0% 108 96546s
1957784 1687036 3.00000 91 70 5.00000 3.00000 40.0% 108 96689s
1959168 1688207 4.00000 183 19 5.00000 3.00000 40.0% 108 96843s
1960549 1689396 4.00000 76 81 5.00000 3.00000 40.0% 108 97005s
1961974 1690339 4.00000 136 29 5.00000 3.00000 40.0% 108 97126s
1963143 1691633 4.00000 194 9 5.00000 3.00000 40.0% 108 97289s
1963678 1691633 3.00000 221 34 5.00000 3.00000 40.0% 108 97290s
1964695 1692775 4.00000 213 10 5.00000 3.00000 40.0% 108 97500s
1966085 1693856 4.00000 189 10 5.00000 3.00000 40.0% 108 97661s
1967438 1695055 4.00000 151 24 5.00000 3.00000 40.0% 108 97828s
1968921 1696134 4.00000 193 9 5.00000 3.00000 40.0% 108 97968s
1970222 1697247 4.00000 194 19 5.00000 3.00000 40.0% 108 98120s
1971577 1698354 3.00000 114 7 5.00000 3.00000 40.0% 108 98271s
1972934 1699445 4.00000 116 31 5.00000 3.00000 40.0% 108 98431s
1974211 1700860 3.00000 114 58 5.00000 3.00000 40.0% 108 98586s
1975836 1702075 4.00000 100 67 5.00000 3.00000 40.0% 108 98744s
1976371 1702075 4.00000 105 21 5.00000 3.00000 40.0% 108 98745s
1977269 1703401 4.00000 227 22 5.00000 3.00000 40.0% 108 98926s
1978869 1704634 infeasible 246 5.00000 3.00000 40.0% 108 99115s
1980458 1705861 infeasible 143 5.00000 3.00000 40.0% 108 99274s
1980647 1705861 4.00000 157 20 5.00000 3.00000 40.0% 108 99275s
1981945 1707001 4.00000 126 55 5.00000 3.00000 40.0% 108 99443s
1983289 1708500 infeasible 143 5.00000 3.00000 40.0% 108 99622s
1985056 1709349 3.00000 144 30 5.00000 3.00000 40.0% 108 99749s
1986087 1710525 4.00000 132 23 5.00000 3.00000 40.0% 109 99906s
1987487 1711020 3.00000 132 16 5.00000 3.00000 40.0% 109 100006s
Explored 1988084 nodes (215768956 simplex iterations) in 100008.49 seconds (8797.40 work units)
Thread count was 16 (of 16 available processors)
Solution count 111: 5 5 5 ... 5

Yes, I tested different models and that is how I found those parameters.
Thank you,
Best regards,
Carlos
-
Hi Carlos,
Thank you for the log output.
Just from looking at the log, I am skeptical that playing with the parameters alone would help here. I think that adding cuts by hand or finding a different formulation is the way to go. Maybe the Tech Talk about Strong MIP formulations can come in handy.
You are saying that you are interested in all such solutions. Do you think that you could somehow take advantage of this information in a MIPSOL callback by introducing a user cut or a lazy constraint?
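For illustration only, the mechanics of such a callback could look roughly like the sketch below. The binary variables x are just a stand-in for your on/off node variables, and the lazy "no-good" cut that excludes the current incumbent is only one example of what could be added there; it is not taken from your model.

import gurobipy as gp
from gurobipy import GRB

def exclude_incumbent(model, where):
    # Runs each time the solver finds a new incumbent solution
    if where == GRB.Callback.MIPSOL:
        vals = model.cbGetSolution(model._x)
        # Record the 0/1 pattern ourselves, because the cut below will
        # reject this incumbent and it will not end up in the solution pool
        model._found.append(tuple(int(round(v)) for v in vals))
        ones = [x for x, v in zip(model._x, vals) if v > 0.5]
        zeros = [x for x, v in zip(model._x, vals) if v <= 0.5]
        # Lazy no-good cut: forbid exactly this on/off pattern so the
        # search is forced to move on to different solutions
        model.cbLazy(gp.quicksum(1 - x for x in ones) +
                     gp.quicksum(x for x in zeros) >= 1)

m = gp.Model()
x = m.addVars(10, vtype=GRB.BINARY, name="x")  # stand-in on/off node variables
# ... build the rest of the model here ...
m._x = list(x.values())
m._found = []
m.Params.LazyConstraints = 1  # required when adding lazy constraints in a callback
m.optimize(exclude_incumbent)

Whether something like this pays off depends on whether you can exploit the structure of your model in the constraint you add; a plain no-good cut like the one above is usually weak.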
Best regards,
Jaromił
-
Hello again Jaromil,
I went down the rabbit hole of strengthening the formulation, and from my perspective it is exactly what I need. The problem is that I am not sure where to start. I've watched both tech talks, and I believe a weak formulation might be my bottleneck.
I tried implementing lazy constraints and cuts, but it didn't work; it actually hurt performance. Could you suggest a way to plot the solution space to look for the culprit?
Thanks for any comment,
Best regards,
Carlos