Erratic model infeasibility due to large objective function coefficients
Dear Gurobi community
We are seeing odd behaviour with the Gurobi solver 8.1.1 when creating and solving the same model multiple times. Most of the time the solver finds the same optimal solution. Sometimes, however, the model is reported infeasible or unbounded. Since we store the LP (and MPS) file of each run, we could investigate why the model was infeasible or unbounded in some runs. It turns out that the infeasibility or unboundedness is caused by coefficients in the objective function. Coefficients that should be zero appear in most runs as very small numbers (e.g. 6.91074e-310); in some runs, a few of them are very large (e.g. 2.2980248982433084e+69), which makes the model infeasible or unbounded. Note that these are coefficients of variables that we never add explicitly to the objective function: they are used only in constraints and are created with the command m.addVars(group_ids, clique_ids, dialog_ids, name='d'). Is anyone aware of this behaviour? Is there something we can do to prevent it?
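(As an aside for readers: the very small stray values quoted above, such as 6.91074e-310 and 2.42092e-322, are all subnormal IEEE-754 doubles, i.e. their exponent field is zero. Arbitrary low bit patterns reinterpreted as doubles typically look exactly like this, which is the classic signature of uninitialized memory leaking into the coefficients. A quick Python check, using only the values from the LP extracts:)

```python
import struct

def exponent_bits(x: float) -> int:
    """Return the 11-bit exponent field of an IEEE-754 double."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    return (bits >> 52) & 0x7FF

# The stray "almost zero" coefficients from the LP file are subnormal:
# their exponent field is all zeros, so they are low raw bit patterns
# reinterpreted as doubles.
for x in (6.91074e-310, 2.42092e-322, 1.47918e-315):
    assert exponent_bits(x) == 0  # subnormal

# A legitimately computed coefficient has a nonzero exponent field:
assert exponent_bits(1.0) != 0
assert exponent_bits(2.2980249818296836e+69) != 0
```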
Thank you very much for your help.
Best regards
Philipp
Some additional information:
- Programming language: Python
- Platform: Linux
- The large coefficients also appear in the MPS files.
- The decision variables that sometimes receive these large coefficients are added after the objective has been set.
- The constraint matrix is identical in all runs. Differences only occur with respect to the coefficients in the objective function.
- Here is an extract from the LP file with large coefficients:
-------------------------------------------------------------------
+ 2.2980249818296836e+69 d[1222,4,0] + 0 d[1223,1,0]
+ 2.42092e-322 d[1223,2,0] + 1.47918e-315 d[1223,3,0]
+ 6.94211e-310 d[1223,4,0] + 0 d[1224,1,0] + 0 d[1224,2,0]
+ 1.63042e-322 d[1224,3,0] + 2.2980249818296836e+69 d[1224,4,0]
+ 2.16467e-316 d[1225,1,0] + 2.42092e-322 d[1225,2,0]
+ 1.47918e-315 d[1225,3,0] + 6.94211e-310 d[1225,4,0]
+ 1.47926e-315 d[1226,1,0] + 3.61763e-315 d[1226,2,0]
+ 2.42092e-322 d[1226,3,0] + 2.2980248982433084e+69 d[1226,4,0]
+ 3.95253e-323 d[1227,1,0] + 3.95253e-323 d[1227,2,0]
+ 2.37152e-322 d[1227,3,0] + 2.42092e-322 d[1227,4,0]
+ 5.56748e-319 d[1228,1,0] + 5.9823e-319 d[1228,2,0]
+ 8.51962e-319 d[1228,3,0] + 0 d[1228,4,0] + 4.94066e-324 d[1229,1,0]
+ 3.95253e-323 d[1229,2,0] + 0 d[1229,3,0] + 1.63042e-322 d[1229,4,0]
+ 6.94219e-310 d[1230,1,0] + 0 d[1230,2,0] + 1.63042e-322 d[1230,3,0]
+ 9.88131e-324 d[1230,4,0] + 0 d[1231,1,0] + 1.63042e-322 d[1231,2,0]
+ 2.5223e-319 d[1231,3,0] + 7.02843e-319 d[1231,4,0]
Subject To
-------------------------------------------------------------------
- And here is an extract from the LP file with regular coefficients:
-------------------------------------------------------------------
+ 6.91074e-310 d[1224,3,0] + 6.91074e-310 d[1224,4,0]
+ 6.91074e-310 d[1225,1,0] + 6.91074e-310 d[1225,2,0]
+ 6.91074e-310 d[1225,3,0] + 6.91074e-310 d[1225,4,0]
+ 6.91074e-310 d[1226,1,0] + 6.91074e-310 d[1226,2,0]
+ 6.91074e-310 d[1226,3,0] + 6.91074e-310 d[1226,4,0]
+ 6.91074e-310 d[1227,1,0] + 6.91074e-310 d[1227,2,0]
+ 6.91074e-310 d[1227,3,0] + 6.91074e-310 d[1227,4,0]
+ 6.91074e-310 d[1228,1,0] + 6.91074e-310 d[1228,2,0]
+ 6.91074e-310 d[1228,3,0] + 6.91074e-310 d[1228,4,0]
+ 6.91074e-310 d[1229,1,0] + 6.91074e-310 d[1229,2,0]
+ 6.91074e-310 d[1229,3,0] + 6.91074e-310 d[1229,4,0]
+ 6.91074e-310 d[1230,1,0] + 6.91074e-310 d[1230,2,0]
+ 6.91074e-310 d[1230,3,0] + 6.91074e-310 d[1230,4,0]
+ 6.91074e-310 d[1231,1,0] + 6.91074e-310 d[1231,2,0]
+ 6.91074e-310 d[1231,3,0] + 6.91074e-310 d[1231,4,0]
Subject To
-------------------------------------------------------------------
Just to make sure I understand it correctly: you get these very strange values like 1e+69 or 1e-310 in the model, even though you never added those coefficients in your Python program?
My guess is that something in your program is wrong. You could debug the situation by writing out the MPS file after every single Python command that adds something to the model. Then, you can probably identify the model modification code that introduces those bogus coefficients.
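(A minimal sketch of that debugging approach, assuming gurobipy; the model-building steps shown are hypothetical stand-ins for the real program:)

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("debug")

step = 0
def checkpoint(model):
    """Write the model out after each modification, so that diffing
    consecutive files pinpoints the step that introduces bogus
    coefficients."""
    global step
    model.update()                      # flush pending modifications
    model.write(f"step_{step:03d}.lp")  # or .mps
    step += 1

# hypothetical stand-ins for the real model-building commands
x = m.addVars(3, name="x");            checkpoint(m)
m.setObjective(x.sum(), GRB.MINIMIZE); checkpoint(m)
d = m.addVars(3, 2, name="d");         checkpoint(m)
m.addConstrs((d[i, 0] <= x[i] for i in range(3)), name="c")
checkpoint(m)
```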
Regards,
Tobias
Dear Tobias
Thanks for your prompt reply. Yes, these coefficients are not added in our Python program. The decision variables named d are just added with the command m.addVars(group_ids, clique_ids, dialog_ids, name='d'). They should have a coefficient of zero in the objective function.
I ran the program 200 times on my Windows machine, and every time the coefficients are zero (see the extract from the LP file below). I thus believe that the Python program is fine. The very large coefficients only occur (sometimes) when we run the exact same code on a Linux machine. Would you recommend setting the zero coefficients explicitly when we generate the objective function as a linear expression?
Here is the extract from the LP file we obtain in every run on the Windows machine:
+ 0 d[1224,2,0] + 0 d[1224,3,0] + 0 d[1224,4,0] + 0 d[1225,1,0]
+ 0 d[1225,2,0] + 0 d[1225,3,0] + 0 d[1225,4,0] + 0 d[1226,1,0]
+ 0 d[1226,2,0] + 0 d[1226,3,0] + 0 d[1226,4,0] + 0 d[1227,1,0]
+ 0 d[1227,2,0] + 0 d[1227,3,0] + 0 d[1227,4,0] + 0 d[1228,1,0]
+ 0 d[1228,2,0] + 0 d[1228,3,0] + 0 d[1228,4,0] + 0 d[1229,1,0]
+ 0 d[1229,2,0] + 0 d[1229,3,0] + 0 d[1229,4,0] + 0 d[1230,1,0]
+ 0 d[1230,2,0] + 0 d[1230,3,0] + 0 d[1230,4,0] + 0 d[1231,1,0]
+ 0 d[1231,2,0] + 0 d[1231,3,0] + 0 d[1231,4,0]
Subject To
This is really strange and seems to point to a bug, either in your code or in ours. Would it be possible to send me the Python code that produces this issue? Ideally, you would first simplify and reduce the Python code as much as possible to get a minimal example that still shows the weird behavior.
Regards,
Tobias
Unfortunately, the code is confidential. But I will try to create a toy example that produces the same issue. Meanwhile, we will set the coefficients of all decision variables explicitly and see if the issue can be resolved in this way.
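(A minimal sketch of that workaround, assuming gurobipy; the index sets below are hypothetical stand-ins for the real group/clique/dialog ids. After creating the variables, every objective coefficient is set explicitly via the Obj attribute instead of relying on the default:)

```python
import gurobipy as gp

m = gp.Model("toy")
# hypothetical small index sets standing in for group_ids, clique_ids, dialog_ids
d = m.addVars(range(2), range(2), range(2), name="d")

# Set every objective coefficient explicitly rather than relying on
# the default, so no coefficient is ever left to an uninitialized value.
for v in d.values():
    v.Obj = 0.0

m.update()
m.write("toy.lp")  # inspect the file to verify the coefficients
```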
Thanks for your help!
Kind regards
Philipp