
Memory and solve time for Gurobi optimisation loop increase over time


Comments

5 comments

  • Riley Clement
• Gurobi Staff

    Hi Joe,

    No answers for you at this stage unfortunately, but can you clarify how you are "[adding] a renewed set of variables and constraints" at each iteration, yet "the number of variables and constraints remains stable"?

    On first glance these seem contradictory.

How are you analyzing results? Are you using the time reported by the solver, or the logs, or are you performing your own timing? I think it would be interesting to look at work units as a time series. This will tell you whether Gurobi needs to do more to solve the model as time progresses, or whether the machine is performing worse as time progresses. If you are writing log files as you go, then gurobi-logtools could be used to wrangle this data.
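A minimal sketch of that wrangling, assuming one log file per iteration (the glob pattern is an assumption, and the "Runtime"/"Work" column names should be checked against your gurobi-logtools and Gurobi versions):

    import gurobi_logtools as glt

    # Parse all per-iteration logs into a one-row-per-log summary table.
    results = glt.parse("iteration_*.log")
    summary = results.summary()

    # Inspect solver effort as a time series over the loop iterations.
    print(summary[["Runtime", "Work"]])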

    - Riley

  • Joe Orange

    Hi Riley, 

    Thanks for coming back to me. 

1. I am progressively deleting and re-adding variables and constraints

I am progressively deleting and re-adding the relevant constraints that require an updated set of coefficients/constants. The standard structure of the update process is shown in the first code segment. I have then included an extract of the code from the 'delete_energy_revenue' function below.
import sys
import traceback

def update_model(self):
    # region Delete standard revenue constraints (Mandatory - Options Available)
    try:
        self.delete_energy_revenue()
    except Exception as e:
        print(f"An error occurred when deleting the energy revenue constraints: {e}")
        traceback.print_exc()
        sys.exit()
    # endregion

    # region Reset model (to remove warm start info)
    try:
        self.model.reset(clearall=1)
    except Exception as e:
        print(f"An error occurred when resetting the model: {e}")
        traceback.print_exc()
        sys.exit()
    # endregion

    # region Set params
    try:
        self.set_params()
    except Exception as e:
        print(f"An error occurred when setting parameters: {e}")
        traceback.print_exc()
        sys.exit()
    # endregion

    # region Add standard revenue constraints (Mandatory - Options Available)
    try:
        self.energy_revenue()
    except Exception as e:
        print(f"An error occurred when setting revenue constraints: {e}")
        traceback.print_exc()
        sys.exit()
    # endregion
# Extract from delete_energy_revenue:
try:
    self.model.remove(self.model.getConstrByName("Calculate_forecast_headroom"))
except Exception as e:
    self.logger.error(f"An error has occurred removing the constraint Calculate_forecast_headroom: {e}")
     
2. Memory and optimisation time grow over time
     
    (a) Script-wide logging
    The script-wide CPU/memory logging has returned a similar result. 
     
    (b) Gurobi environment logging
I have recorded the optimisation times, work and memory usage using Gurobi's custom parameters. They suggest growth in both memory usage and optimisation time over the loop. Please note that the flat lines indicate a lack of data (due to an issue with saving the data). Let me know if it's worth re-running the test to return a complete dataset.
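A minimal sketch of that kind of per-iteration logging (attribute availability depends on the Gurobi version: Work needs 9.5+ and MaxMemUsed needs 10.0+; iteration_stats is a hypothetical container):

    self.model.optimize()
    self.iteration_stats.append({             # hypothetical results container
        "runtime_s": self.model.Runtime,      # wall-clock solve time in seconds
        "work": self.model.Work,              # deterministic work units
        "max_mem_gb": self.model.MaxMemUsed,  # peak memory reported by Gurobi, in GB
    })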
  • Riley Clement
• Gurobi Staff

    Hi Joe,

    So it looks like the problems are not getting harder, but Gurobi is taking longer to do the same amount of work.

This could be an interesting experiment: at each iteration, instead of adding new updated constraints, add back the constraints and variables you just deleted, so that the model stays the same. If we still see an increase in time and memory, it would suggest memory isn't being freed and that the solver is being slowed by either a lack of available memory or memory fragmentation.
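A minimal, self-contained sketch of that experiment on a toy model (the model content here is illustrative, not the actual formulation):

    import time

    import gurobipy as gp

    # Toy model: delete and re-add an *identical* constraint every iteration,
    # so the problem Gurobi solves never changes.
    m = gp.Model("repeat_test")
    m.Params.OutputFlag = 0
    x = m.addVar(ub=10.0, name="x")
    m.setObjective(x, gp.GRB.MAXIMIZE)
    m.addConstr(2.0 * x <= 10.0, name="c0")
    m.update()

    for it in range(1000):
        m.remove(m.getConstrByName("c0"))
        m.addConstr(2.0 * x <= 10.0, name="c0")  # identical data every time
        t0 = time.perf_counter()
        m.optimize()
        # If time/memory still creep upward here, the model itself is not the cause.
        print(it, time.perf_counter() - t0)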

    Are you also using `self.model.remove` for the variables you are deleting?

    - Riley

  • Joe Orange

    Hi Riley, 

    1. The form of the optimisation problem is not changing significantly in each iteration

I am adding constraints with the same form in each iteration. The only change is the coefficients/constants that are included in the constraints. For example, I might delete and re-add the following constraint, with the only change being the value of `self.RRP_dict[di]` shifting from 100 to 200.

self.model.addConstr(
    self.market_revenue_spot[di]
    == self.RRP_dict[di] * (self.gen_mlf * self.bess_sell_energy_MWh[di]
                            - self.load_mlf * self.bess_buy_energy_MWh[di]),
    name=f"Calculate_market_revenue_from_spot_market_in_each_dispatch_interval_{di}")

    There is no evidence of any 'seasonal' trend in the data that would lead to a permanent, structural change in the type of problems being solved in each iteration. Please let me know whether it would still be useful to design a script to run this test. 

2. I am removing and re-adding variables using the same approach

I can confirm that I remove variables in each iteration using the same approach. I have included an example below.

# Remove dictionary variables
for di in range(len(self.T_i)):
    var_name = f"net_market_revenue_{di}"
    try:
        self.model.remove(self.model.getVarByName(var_name))
    except Exception as e:
        print(f"Variable {var_name} does not exist: {e}")

    3. Memory limits/fragmentation

I'd appreciate more guidance on how to establish whether the issue stems from insufficient or fragmented memory.

    Cheers, 

    Joe

  • Riley Clement
• Gurobi Staff

    Hi Joe,

At the end of your code, when the model is disposed of (by calling either model.close() or model.dispose()), do you see the memory drop back down?
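For instance, a quick check sketched with psutil (an assumption on my part; any resident-set-size probe would do):

    import os

    import psutil

    proc = psutil.Process(os.getpid())
    print(f"RSS before dispose: {proc.memory_info().rss / 1e6:.1f} MB")
    model.close()  # or model.dispose(); both release the C-layer memory
    print(f"RSS after dispose:  {proc.memory_info().rss / 1e6:.1f} MB")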

When you call model.remove() it will remove the object from the model (in the C layer), but if there's any reference to it in Python then the associated Python object will still exist and consume memory. Can you check that all references to the deleted variables and constraints are removed?
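A minimal sketch of that clean-up, assuming the variables were also stored in a Python dict (`self.net_market_revenue` is a hypothetical container name):

    # Remove the variables from the Gurobi model...
    for di in range(len(self.T_i)):
        var_name = f"net_market_revenue_{di}"
        try:
            self.model.remove(self.model.getVarByName(var_name))
        except Exception as e:
            print(f"Variable {var_name} does not exist: {e}")
    self.model.update()  # flush the pending removals

    # ...and drop the Python-side references so the Var objects can be
    # garbage collected (hypothetical dict holding the Var objects):
    self.net_market_revenue.clear()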

At the end of each solve you could also log info about the number of live objects, which might shed some light on it:

import gc
...
unreachable = gc.collect()  # force garbage collection
print(f"Unreachable objects: {unreachable}")
print(f"Garbage collector objects: {len(gc.get_objects())}")

I haven't used it myself, but objgraph looks like it could come in handy.
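For example (assuming objgraph is installed via pip):

    import objgraph

    # Print the object types whose instance counts grew since the last call;
    # steadily growing gurobipy types would point at references that are
    # never released.
    objgraph.show_growth(limit=10)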

    - Riley

