Memory usage and solve time for a Gurobi optimisation loop increase over time
Context
I am using Gurobi (v12.0.1) to solve an optimisation problem with ~15K variables, ~15K constraints and 1 objective in a loop ~100K times. In each iteration of the loop, I perform four main actions:
- Delete subset of variables and constraints
- Call the "model.reset(1)" function to reset the model to its unsolved state.
- Add a renewed set of variables and constraints (based on an updated dataset)
- Optimise the model.
Over time, the number of variables and constraints remains stable.
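In schematic form the loop looks like this (the helper function names below are placeholders, not my actual code):

import gurobipy as gp

model = gp.Model("rolling_model")            # model name is illustrative only
build_initial_model(model)                   # placeholder: adds the ~15K variables and constraints

for t in range(100_000):
    remove_stale_vars_and_constrs(model)     # placeholder: delete the subset being refreshed
    model.reset(1)                           # discard the previous solution / MIP start
    add_updated_vars_and_constrs(model, t)   # placeholder: re-add them from the updated dataset
    model.optimize()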
Problem
Originally, each solve takes approximately 0.1 seconds. However, there is a sharp jump in solve time approximately 10% of the way through the script's runtime. After this point, the solve time increases steadily. This increased solve time corresponds with a progressive increase in memory usage across the lifetime of the script.
Similar issues raised in the forum
Other posts in this forum on similar topics have included:
- Why using gurobi solver in loop could cause increase the optimisation time
- How to decrease the running time for a sequence optimisation
- Solution time drastically increased when in a loop?
- Gurobi running out of memory in loop, but deleting environment every iteration
I have confirmed that the model remains the same size, have correctly used 'model.reset(1)' to discard previous solutions/MIP starts, and am not looking to perform a stochastic optimisation (i.e. with multiple scenarios) at this time.
Request for assistance
I'd appreciate any assistance the Gurobi team can give me in this regard. Happy to provide a link to my private Github repository at your request.
-
Hi Joe,
No answers for you at this stage unfortunately, but can you clarify how you are "[adding] a renewed set of variables and constraints" at each iteration, yet "the number of variables and constraints remains stable"?
On first glance these seem contradictory.
How are you analyzing results? Are you using the time reported by the solver, or the logs, or are you performing your own timing? I think it would be interesting to look at work units as a time series. This will tell you if Gurobi is needing to do more to solve the model as time progresses, or if the machine is performing worse as time progresses. If you are writing log files as you go then gurobi-logtools could be used to wrangle this data.
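As a rough sketch (the file name, loop bound and update step below are just placeholders), that per-iteration logging could look like:

import csv

# Record the solver-reported metrics after each solve so they can be analysed as a time series.
with open("solve_stats.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["iteration", "runtime_s", "work_units", "status"])
    for i in range(n_iterations):      # n_iterations: placeholder for the loop length
        update_model(model)            # placeholder for your per-iteration update step
        model.optimize()
        writer.writerow([i, model.Runtime, model.Work, model.Status])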
- Riley
-
Hi Riley,
Thanks for coming back to me.
1. I am progressively deleting and re-adding variables + constraints
I am progressively deleting and re-adding the relevant constraints that require an updated set of coefficients/constants. The standard structure of the update process is shown in the first code segment below. I have then included an extract of the code from the 'delete_energy_revenue' function.

def update_model(self):
    #region Delete standard revenue constraints (Mandatory - Options Available)
    try:
        self.delete_energy_revenue()
    except Exception as e:
        print(f"An error occurred when deleting the energy revenue constraints: {e}")
        traceback.print_exc()
        sys.exit()
    #endregion

    #region Reset model (to remove warm start info)
    try:
        self.model.reset(clearall=1)
    except Exception as e:
        print(f"An error occurred when resetting the model: {e}")
        traceback.print_exc()
        sys.exit()
    #endregion

    #region Set params
    try:
        self.set_params()
    except Exception as e:
        print(f"An error occurred when setting parameters: {e}")
        traceback.print_exc()
        sys.exit()
    #endregion

    #region Add standard revenue constraints (Mandatory - Options Available)
    try:
        self.energy_revenue()
    except Exception as e:
        print(f"An error occurred when setting revenue constraints: {e}")
        traceback.print_exc()
        sys.exit()
    #endregion

Extract from 'delete_energy_revenue':

try:
    self.model.remove(self.model.getConstrByName("Calculate_forecast_headroom"))
except Exception as e:
    self.logger.error(f"An error has occurred removing the constraint Calculate_forecast_headroom: {e}")

2. Memory and optimisation time grow over time

(a) Script-wide logging

The script-wide CPU/memory logging has returned a similar result (a simplified sketch of this logging is included at the end of this comment).

(b) Gurobi environment logging

I have recorded the optimisation times, work and memory usage using Gurobi's custom parameters. They suggest a growth in both memory usage and optimisation time over the loop. Please note that the flat lines indicate a lack of data (due to an issue with saving the data). Let me know if it's worth re-running the test to return a complete dataset.
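For reference, the script-wide logging is along the lines of the following simplified sketch (psutil assumed; not the exact code):

import os
import psutil   # assumes psutil is installed

process = psutil.Process(os.getpid())

def log_memory(iteration):
    rss_mb = process.memory_info().rss / 1024 ** 2   # resident set size of the whole script, in MB
    print(f"iteration {iteration}: RSS = {rss_mb:.1f} MB")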
-
Hi Joe,
So it looks like the problems are not getting harder, but Gurobi is taking longer to do the same amount of work.
This could be an interesting experiment: at each iteration instead of adding new updated constraints, add back in the constraints and variables you just deleted, so that the model is the same. If we still see an increase in time and memory then it would suggest memory isn't being freed and the solver is being slowed by either lack of memory available or memory fragmentation.
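As a rough sketch of what I mean for a single constraint (re-using the Calculate_forecast_headroom constraint from your snippet; variables could be handled the same way):

# Sketch only: cache a constraint's data, delete it, then re-add an identical copy
# so the model is mathematically unchanged from one iteration to the next.
name = "Calculate_forecast_headroom"
old = self.model.getConstrByName(name)
row, sense, rhs = self.model.getRow(old), old.Sense, old.RHS   # cache the constraint's data
self.model.remove(old)
self.model.addLConstr(row, sense, rhs, name=name)              # re-add an identical copy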
Are you also using `self.model.remove` for the variables you are deleting?
- Riley
-
Hi Riley,
1. The form of the optimisation problem is not changing significantly in each iteration
I am adding constraints with the same form in each iteration. The only change is the coefficients/constants that are included in the constraints. For example, I might delete and re-add the following constraint, with the only change being that self.RRP_dict[di] shifts from 100 to 200.
self.model.addConstr(self.market_revenue_spot[di] == self.RRP_dict[di] * (
    self.gen_mlf * self.bess_sell_energy_MWh[di] - self.load_mlf * self.bess_buy_energy_MWh[di]),
    name=f"Calculate_market_revenue_from_spot_market_in_each_dispatch_interval_{di}")

There is no evidence of any 'seasonal' trend in the data that would lead to a permanent, structural change in the type of problems being solved in each iteration. Please let me know whether it would still be useful to design a script to run this test.
2. I am removing and re-adding variables using the same approach
I can confirm that I am removing variables in each iteration using the same approach. I have included an example below.
# Remove dictionary variables
for di in range(len(self.T_i)):
    var_name = f"net_market_revenue_{di}"
    try:
        self.model.remove(self.model.getVarByName(var_name))
    except Exception as e:
        print(f"Variable {var_name} does not exist: {e}")

3. Memory limits/fragmentation
I'd appreciate more guidance on how to establish whether the issue is stemming from insufficient/fragmented memory.
Cheers,
Joe
-
Hi Joe,
At the end of your code, when the model is disposed of (either by calling model.close() or model.dispose()), do you see the memory drop back down?
When you call model.remove() it will remove the object from the model (in the C layer), but if there is any reference to it in Python then the associated Python object will still exist and consume memory. Can you check that all references to the deleted variables and constraints are removed?
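As an illustration (the dictionary name below is hypothetical), any container that still holds the removed objects needs to be cleared as well:

# Hypothetical container of gurobipy.Var objects built when the model was created.
for var in self.net_market_revenue_vars.values():
    self.model.remove(var)              # removes the variable from the underlying model
self.net_market_revenue_vars.clear()    # drop the Python references so they can be garbage collected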
At the end of each solve you could also log info about the number of objects which might shed some light on it:
import gc
...
unreachable = gc.collect()  # force garbage collection; returns the number of unreachable objects found
print(f"Unreachable objects: {unreachable}")
print(f"Garbage collector objects: {len(gc.get_objects())}")

I haven't used it myself, but objgraph looks like it could come in handy.
- Riley