
can't pickle pycapsule object multiprocess


5 comments

  • Jaromił Najman
    Gurobi Staff

    At node (i), build the original model from scratch and add the additional cuts/variables which the node demands. This way works, but building the model from scratch is a computational burden. 

    This is the correct way to go here because the gurobipy module is not thread safe. This means that each model has to have its own environment within a parallel run and you cannot copy models from one environment to another.

    At node (i), copy the model from the parent node (p), and add additional cuts/variables. This way does not work, and I received the following error:

    I assume, you are trying to pickle a model to copy it over. A model object is not picklable, hence the error. Additionally, as mentioned above, copying models from one environment to another is currently not supported.

    Best regards, 
    Jaromił

  • Florian Götz

    Hi Jaromił, I'm struggling with the same problem as the original poster. Silke Horn states in the following article that one can solve different optimization problems in parallel with multiprocessing if you stick to the given code example: https://support.gurobi.com/hc/en-us/articles/360043111231-How-do-I-use-multiprocessing-in-Python-with-Gurobi- So far this implementation works great. But since I want to measure the idealized time of both problems being solved in parallel (see below), and I cannot and should not make any assumptions about how the tasks are delegated to the CPU cores by the multiprocessing pool, Silke's example is not so useful to me.

    Is there no other way to pass a "solve_model" function to the pool that just solves the optimization problems with the modification that the problems do not need to be defined in "solve_model" itself?

    At the moment I'm trying to produce a minimal example that accomplishes just that:

    import numpy as np
    import gurobipy as gp
    from gurobipy import GRB
    import multiprocessing as mp
    import time

    #First model:
    my_model_1 = gp.Model("m1")
    x1 = my_model_1.addVar(name="x1")
    constr1 = my_model_1.addConstr(x1 <= 17)
    my_model_1.setObjective(2 * x1, GRB.MAXIMIZE)

    #Second model:
    my_model_2 = gp.Model("m2")
    x2 = my_model_2.addVar(name="x2")
    constr2 = my_model_2.addConstr(x2 <= 17)
    my_model_2.setObjective(3 * x2, GRB.MAXIMIZE)

    def solve(m):
        m.optimize()
        return m

    all_models = [my_model_1, my_model_2]

    with mp.Pool() as pool:
        start = time.time()
        # Here I get the same error message: "cannot pickle 'PyCapsule' object".
        result = pool.map(solve, all_models)
        end = time.time()

    idealized_time = end - start


  • Jaromił Najman
    Gurobi Staff

    Hi Florian,

    Is there no other way to pass a "solve_model" function to the pool that just solves the optimization problems with the modification that the problems do not need to be defined in "solve_model" itself?

    Unfortunately, it is currently not possible.

    You could try generating model files (MPS, for example) via the write method and then reading them in each solve_model call instead of constructing the models from scratch. You could also measure the time needed to construct/read the model and subtract that from the overall time of a solve_model call.

    But since I want to measure the idealized time of both problems being solved in parallel (see below), and I cannot and should not make any assumptions about how the tasks are delegated to the CPU cores by the multiprocessing pool, Silke's example is not so useful to me.

    If the models are of similar size and you run enough trials for your tests, then you should be safe to just ignore the model construction/reading time and still get a valid statement from your parallelization experiments.

    Best regards, 
    Jaromił

  • Florian Götz

    Hi Jaromił,

    You could try generating model files (MPS for example) via the write method and then reading them in each solve_model call instead of constructing the models from scratch. 

    Thanks for the insight. I have implemented your idea as indicated. However, it is interesting that I get less accurate results when I proceed as follows:

    Instead of saving the finished but unsolved model as an MPS file, I could save it in a list of "ready to solve" models, so no reading from/writing to the hard drive would be necessary. Then I pass an index to the solve method that points to the corresponding model in the list. But with this approach I lose accuracy in the solutions.

    Could you please explain to me why this happens?

    Best regards,

    Florian


  • Jaromił Najman
    Gurobi Staff

    Hi Florian,

    Could you please share a minimal reproducible example of the issue you describe?

    Best regards, 
    Jaromił

