Different best bounds for different python scripts that run precisely the same model for the same problem
Hi,
I am solving scheduling problems with Python and Gurobi. Because the script is updated regularly, I save major updates as new versions. Part of my script is a function that uses Gurobi to solve a MIP with only binary variables.
One of my older versions solves a problem very efficiently (it finds the optimal solution in 2 minutes, with best bound = 1906). The latest version performs very poorly on the exact same problem (it reaches the time limit of 2 hours and finds a solution with a 0.1% gap, with best bound = 1959). I did make some changes in the latest version, but none of them affect the model formulation, Gurobi's parameters, or the order of variables/constraints. I also noticed that the model fingerprints are different.
After many failed attempts to replicate the results, I copied the whole function from the older version and pasted it into the latest version, and even this attempt failed. In other words, I have two different Python scripts, I run the exact same MIP function in both, and I obtain completely different results. How is this possible? Why are the best bounds different?
I would really appreciate your thoughts on this.
Kind regards,
Yaroslav
-
Hi Yaroslav,
Since you noticed that the fingerprints are different, this is an indication that different models are solved.
Did you try to export the model for both scripts to check whether they indeed result in the same model formulation?

Best regards,
Marika
-
Hi Marika,
Thank you for your quick reply!
Checking both models would be extremely time-consuming, as the model has 200k variables and 11k constraints. Maybe I will check when I find some free time. In the meantime, I have decided to continue with the older version.
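If I do check, rather than reading the exported files by eye, a textual diff of the two LP files (written with Gurobi's model.write(...) in each script; the file names below are placeholders) should surface the first formulation difference without a manual read:

```python
# Sketch: compare two exported model files and return the first
# differing region. Works on .lp files, which are plain text.
import difflib

def first_model_differences(path_a, path_b, context=2, max_lines=20):
    """Return up to max_lines of unified diff between two model files.

    An empty result means the files are textually identical; a non-empty
    result points at the first place the formulations diverge.
    """
    with open(path_a) as fa, open(path_b) as fb:
        a, b = fa.readlines(), fb.readlines()
    diff = difflib.unified_diff(a, b, fromfile=path_a, tofile=path_b, n=context)
    return list(diff)[:max_lines]
```

With 200k variables this at least narrows the check to the lines that actually differ instead of the whole file.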
However, what I am more curious about is why Gurobi produces different models when the function is identical in both scripts. It seems as if some external factors that have nothing to do with the model formulation are actually affecting it. The oddest observation is the difference in best bounds.
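One guess on my side (the names below are made up): if the model-building loop iterates over an unordered Python set or dict, the constraints can be added in a different order on each run, since Python randomizes string hashing per process. That alone changes the fingerprint and the solve path even though the formulation is mathematically identical:

```python
# Purely illustrative: iteration order over a set is not guaranteed,
# so constraint rows built from it may land in the model in a
# different order between runs or scripts. Sorting the keys first
# makes the construction order deterministic.
tasks = {"paint", "weld", "inspect"}

rows_unordered = [f"c_{t}" for t in tasks]        # order not guaranteed
rows_sorted = [f"c_{t}" for t in sorted(tasks)]   # deterministic order
```

Both lists contain the same constraints; only the order differs, which is exactly the kind of "external factor" that would not show up in the formulation itself.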
Anyway, thank you again; if I find out what is wrong, I will let you know.
Best regards,
Yaroslav
-
Could it be that this "external factor" is some data you read, and that it differs between the two scripts?
-
The data used to construct the model are read in the same way. I read the data from an Excel file; the latest version has two additional sheets compared to the older one. These additional data are used to select which constraints to include in the model and to set weight values.
However, I am using the function from the older version, which does not support this new feature. As a result, these additional data are not used.
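To rule out any accidental difference regardless, a quick byte-level check of the two exported model files (a sketch; the paths are placeholders) could precede a full diff:

```python
# Sketch: hash each exported model file. Equal digests mean the two
# scripts wrote byte-identical models; unequal digests mean something
# in the construction differs and a line-by-line diff is worthwhile.
import hashlib

def lp_digest(path):
    """SHA-256 digest of an exported model file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```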