
Gurobi running out of memory in a loop despite deleting the environment every iteration

Answered

Comments

12 comments

  • Jaromił Najman
    Gurobi Staff

    Hi Tenzin,

    Is the out-of-memory error happening during model building or during the solution process?

    Could you please try using the dispose() method instead of close()?
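
    For reference, a minimal sketch of the cleanup pattern in a loop (the model-building step is a placeholder; gurobipy's Env and Model can also be used as context managers, which dispose them automatically on exit):

    import gurobipy as gp

    for i in range(5):
        env = gp.Env()
        model = gp.Model(env=env)
        # ... build and optimize the model here (placeholder) ...
        model.dispose()  # release the model's native memory first
        env.dispose()    # then release the environment

    # Equivalent pattern with context managers: dispose() is called
    # automatically when each with-block exits
    with gp.Env() as env, gp.Model(env=env) as model:
        pass  # build and optimize here (placeholder)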

    Best regards, 
    Jaromił

  • Tenzin Frijlink

    Hi Jaromił,

    Thank you for the reply and suggestion! I tried using dispose instead of close, but it does not seem to make a difference. Also, out of interest: according to the documentation, close and dispose are equivalent methods, so what is the use of one over the other?

    After checking, I think the out-of-memory error occurs during model building. This is also where it slows down significantly after roughly 200 iterations. I also kept track of the run times reported by Gurobi, and they remain at a few seconds even right before running out of memory, so I am fairly sure now that the issue is in model building.

    Sometimes it gets stuck at this stage for a few minutes (sometimes it recovers, and sometimes it gives an out-of-memory error):

    Set parameter Username
    Set parameter WLSAccessID
    Set parameter WLSSecret
    Set parameter LicenseID to [redacted]
    Academic license   [redacted]
    Set parameter TimeLimit to value 10
    Set parameter MIPGap to value 0.01

    Whereas in other iterations it prints the following right away and then runs through the solving process:

    Set parameter Username
    Set parameter WLSAccessID
    Set parameter WLSSecret
    Set parameter LicenseID to value  [redacted]
    Academic license [redacted]
    Set parameter TimeLimit to value 10
    Set parameter MIPGap to value 0.01
    Gurobi Optimizer version 11.0.1 build v11.0.1rc0 (win64 - Windows 10.0 (19045.2))

    CPU model: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz, instruction set [SSE2|AVX|AVX2]
    Thread count: 4 physical cores, 8 logical processors, using up to 8 threads

    Thanks again,

    Tenzin

  • Jaromił Najman
    Gurobi Staff

    Hi Tenzin,

    "Also, out of interest: according to the documentation, close and dispose are equivalent methods, so what is the use of one over the other?"

    This was just to check whether one or the other has a bug, but since both behave the same, it should be alright.

    Could you try computing how many variables and constraints you are adding in each iteration? In particular, could you please compute the number of variables and constraints you are trying to add in the iteration where you run into the memory issue? It is possible that your machine simply runs out of memory due to the model size. Could you also please track the memory consumption to see whether it drops after an optimization process has finished? This would confirm that the model and environment are properly disposed.
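
    For example, a rough sketch of how this could be tracked (psutil is just one option for reading the process's memory, and the model-building step is a placeholder):

    import psutil
    import gurobipy as gp

    proc = psutil.Process()  # handle to the current Python process
    for i in range(300):  # stand-in for your actual iteration loop
        env = gp.Env()
        model = gp.Model(env=env)
        # ... build this iteration's model here (placeholder) ...
        model.update()
        print(f"iter {i}: {model.NumVars} vars, {model.NumConstrs} constrs")
        model.optimize()
        model.dispose()
        env.dispose()
        rss_mb = proc.memory_info().rss / 1e6
        print(f"iter {i}: RSS after dispose = {rss_mb:.0f} MB")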

    Best regards, 
    Jaromił

  • Max Berktold

    I have exactly the same problem. It only seems to happen with the WLS license; on my first laptop I don't have this issue. I would really love to see a solution to this problem.

    Here is my code to reproduce the problem; you can literally watch the steady memory increase in the Task Manager.

    import time
    import gurobipy as gp
    from gurobipy import GRB
    import numpy as np

    def create_large_random_milp(model, num_vars, num_constraints, density=0.1, integer_ratio=0.5, seed=None):

        if seed is not None:
            np.random.seed(seed)

        # Generate random objective coefficients
        obj_coeffs = np.random.rand(num_vars)

        # Generate variable types (binary, integer, or continuous)
        var_types = np.random.choice([GRB.BINARY, GRB.INTEGER, GRB.CONTINUOUS], size=num_vars,
                                     p=[integer_ratio / 2, integer_ratio / 2, 1 - integer_ratio])

        # Create variables
        vars = []
        for i in range(num_vars):
            var = model.addVar(vtype=var_types[i], obj=obj_coeffs[i], name=f'x_{i}')
            vars.append(var)

        # Generate random constraints
        for j in range(num_constraints):
            # Select a random subset of variables for each constraint
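            # NOTE: np.random.choice converts this Python list of Vars into a
            # NumPy object array internally -- the np.array/np.asarray pattern
            # later identified in this thread as the trigger of the leak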
            subset = np.random.choice(vars, size=int(density * num_vars), replace=False)

            # Generate random coefficients for the selected variables
            coeffs = np.random.rand(len(subset))

            # Generate a random right-hand side
            rhs = np.random.rand()

            # Add the constraint to the model
            model.addConstr(gp.quicksum(coeffs[k] * subset[k] for k in range(len(subset))) <= rhs, name=f'c_{j}')

        # Set model to maximize or minimize
        model.ModelSense = GRB.MAXIMIZE

        # Integrate new variables into the model
        model.update()
        model.optimize()


    # Example usage:
    num_vars = 20000  # Number of variables
    num_constraints = 200  # Number of constraints
    density = 0.05  # Density of each constraint (percentage of variables involved in each constraint)
    integer_ratio = 0.3  # Ratio of integer variables (binary + integer)
    seed = 42  # Random seed for reproducibility


    for i in range(1, 5):
        env = gp.Env()
        model = gp.Model(env=env)

        print('Starting:', i)
        create_large_random_milp(model, num_vars, num_constraints, density, integer_ratio, seed)
        print('Finished', i)
        time.sleep(5)

        model.close()
        env.dispose()
  • Max Berktold

    For me the problem was Python 3.12. After downgrading to Python 3.10, the problem did not occur.

    Greetings!

  • Can Akkan

    I have had a similar problem. I am solving MIP models repeatedly (hundreds of times) using Gurobi (v11.0.1) from Python. m.dispose() was freeing up only a small fraction of the memory allocated during model creation and solving when I used Python 3.12. As suggested above, I switched to Python 3.10, and the problem is solved. Thanks.

  • Haoxiang Yang

    Hi, I had the same problem with Python 3.12. The memory usage seemed to pile up even when I tried "del model". The issue was solved when I switched to Python 3.10. Thanks!

  • Simon Bowly
    Gurobi Staff

    Hi all, sorry for the delayed follow-up. Is everyone experiencing this issue using numpy in some way? If anyone has a minimal working example where the memory issue occurs and numpy is *not* involved that would be helpful.

  • Can Akkan

    Dear Simon,

    I am using numpy, so unfortunately I do not have such an example.

  • Alex Lusak

    I'd also like to share that I was having the same issue until I downgraded from Python 3.12 to 3.10. Model building consumes far more memory on 3.12: on 3.10 my usage peaked at around 5 GB, while on 3.12 it climbed all the way to 36 GB, and the solve could no longer proceed because it exceeded the available RAM.

    Is Gurobi actively working on a fix? This seems like a pretty extreme difference in consumption just from changing the Python version.

  • Mario Ruthmair
    Gurobi Staff

    Our developers narrowed it down to a memory leak related to the Cython version used to build gurobipy 11.0.x. We are using a newer Cython version for the upcoming 12.0.0 release, so the issue will be resolved there.
    As you have already realized, the issue seems to be triggered by passing a Var or a list of Vars to np.array or np.asarray. This can happen in library code like cvxpy, in some corners of our own matrix-friendly API, as well as directly in user code.
    Unfortunately, the only workaround is to use Python 3.11 or earlier with gurobipy 11.0.x.
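
    As a purely illustrative sketch based on the reproduction script above (the Var list and subset size are placeholders), this is the triggering pattern and one way to avoid the conversion in your own code; it does not help when a library performs the conversion internally:

    import numpy as np
    import gurobipy as gp

    env = gp.Env()
    model = gp.Model(env=env)
    vars = [model.addVar(name=f"x_{i}") for i in range(100)]  # placeholder Vars
    k = 10  # placeholder subset size

    # Triggers the leak on gurobipy 11.0.x + Python 3.12: the list of Vars
    # is implicitly converted to a NumPy object array
    subset = np.random.choice(vars, size=k, replace=False)

    # Avoids the conversion in user code: sample indices instead and keep
    # the Vars in a plain Python list
    idx = np.random.choice(len(vars), size=k, replace=False)
    subset = [vars[i] for i in idx]

    model.dispose()
    env.dispose()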

