Solving multiple independent LPs in parallel in Python
Good afternoon,
My name is Boyu. I am a college student and a newbie in Python and Gurobi. Currently, one step of my model requires solving 48 independent LPs. Each LP has the same number of variables and constraints; the only difference between them is the coefficient values, which are all known before the model runs.
I started by building a small example similar to the real case in my model; the code is shown below.
First, I defined a function for building and solving a single LP, where a, b, c, d are dictionaries of known coefficients, m is a dictionary holding the LP models, and object_val is a dictionary recording the objective value of each LP:
from gurobipy import Model, GRB
def test(i):
    # Create a new model
    m[i] = Model()
    # Create variables
    x[i] = m[i].addVar(vtype=GRB.BINARY)
    y[i] = m[i].addVar(vtype=GRB.BINARY)
    z[i] = m[i].addVar(vtype=GRB.BINARY)
    # Set objective
    m[i].setObjective(x[i] + y[i] + 2 * z[i], GRB.MAXIMIZE)
    # Add constraint: x + a y + b z <= c
    m[i].addConstr(x[i] + a[i] * y[i] + b[i] * z[i] <= c[i])
    # Add constraint: x + y >= d
    m[i].addConstr(x[i] + y[i] >= d[i])
    # Optimize model
    m[i].optimize()
    object_val[i] = m[i].objVal
Then, I input the coefficient values:
a={}
b={}
c={}
d={}
object_val={}
x={}
y={}
z={}
m={}
a[1]=2
b[1]=3
c[1]=4
d[1]=1
a[2]=2
b[2]=5
c[2]=2
d[2]=7
a[3]=8
b[3]=6
c[3]=3
d[3]=3
a[4]=7
b[4]=8
c[4]=5
d[4]=2
a[5]=3
b[5]=5
c[5]=7
d[5]=2
a[5]=6
b[5]=8
c[5]=3
d[5]=9
Then I tried two approaches for running them in parallel. The first approach uses the joblib and multiprocessing packages:
from joblib import Parallel, delayed
import multiprocessing
num_cores = multiprocessing.cpu_count()
Parallel(n_jobs=num_cores)(delayed(test)(i) for i in [1,2,3,4,5])
However, an error occurs:
AttributeError: b"Unable to retrieve attribute 'objVal'"
The second approach uses the multiprocessing package only:
if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)
    pool.map(test, [1, 2, 3, 4, 5])
    pool.close()
    pool.join()
    print('done')
However, this approach never finishes running.
Could anybody give me some help with this? I am a newbie to Gurobi and Python, and any help would be really appreciated.
Thanks.
Boyu

Hi Boyu,
Before trying to run in parallel, I recommend testing your models one after the other. They are very small and even sequentially, Gurobi should solve all 48 of them in just a few seconds.
Furthermore, you need to enable the shared-memory option in joblib so the workers can write to the dictionaries you defined for the solution output:
Parallel(n_jobs=num_cores, require='sharedmem')
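To see in isolation why require='sharedmem' matters, here is a small stand-alone sketch (no Gurobi involved) that writes results into a shared dictionary. The option forces joblib onto its threading backend, so all workers see the parent's dict; with separate worker processes, each one would mutate its own copy and the parent's dict would stay empty:

```python
from joblib import Parallel, delayed

results = {}

def square(i):
    # Writes land in the parent's dict only because 'sharedmem'
    # makes joblib use threads instead of separate processes.
    results[i] = i * i

Parallel(n_jobs=2, require='sharedmem')(delayed(square)(i) for i in range(5))
print(results)  # all five entries are visible in the parent
```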
You also need to check the solve status before querying the objective value:
if m[i].Status == GRB.OPTIMAL:
    object_val[i] = m[i].objVal
else:
    object_val[i] = ''

In summary, I cannot recommend running Gurobi in parallel like this. For harder problems, the individual jobs will fight over computing resources, and you might run into all kinds of hard-to-debug situations. You should get better performance by simply solving the models sequentially.

Cheers,
Matthias 
Hi Matthias,
Currently, I am trying another way to do this, but a "KeyError: '__getstate__'" occurred.
First, I build the 5 LPs sequentially:
from gurobipy import Model, GRB

a={1:2,2:2,3:8,4:7,5:3}
b={1:3,2:5,3:6,4:8,5:5}
c={1:4,2:2,3:3,4:5,5:7}
d={1:1,2:7,3:3,4:2,5:9}
object_val={}
x={}
y={}
z={}
m={}

for i in [1,2,3,4,5]:
    # Create a new model
    m[i] = Model()
    # Create variables
    x[i] = m[i].addVar(vtype=GRB.CONTINUOUS)
    y[i] = m[i].addVar(vtype=GRB.CONTINUOUS)
    z[i] = m[i].addVar(vtype=GRB.CONTINUOUS)
    # Set objective
    m[i].setObjective(x[i] + y[i] + 2 * z[i], GRB.MAXIMIZE)
    # Add constraint: x + a y + b z <= c
    m[i].addConstr(x[i] + a[i] * y[i] + b[i] * z[i] <= c[i])
    # Add constraint: x + y >= d
    m[i].addConstr(x[i] + y[i] >= d[i])

Second, I defined the function to solve a single LP model and saved it as "test.py":
def test(i):
    # i is an [index, model] pair; optimize the model
    m = i[1]
    m.optimize()
    return m.objVal

Third, I create the input data for the function that will be solved in parallel:
inputs = []
for i in [1, 2, 3, 4, 5]:
    inputs.append([i, m[i]])

Finally, I tried to use the "multiprocessing" package to solve these 5 LPs in parallel:
import test
import multiprocessing

if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)
    pool.map(test.test, inputs)
    pool.close()
    pool.join()
    print('done')

However, an error occurs: "KeyError: '__getstate__'".
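For reference, the KeyError: '__getstate__' happens because multiprocessing pickles every argument it sends to a worker process, and Gurobi Model objects cannot be pickled. A common workaround is to pass only plain coefficient data to the pool and build and solve each LP entirely inside the worker. Below is a minimal sketch of that pattern; it uses scipy.optimize.linprog as a stand-in solver (an assumption, so the sketch runs without a Gurobi license) on the same five coefficient sets as above:

```python
import multiprocessing

from scipy.optimize import linprog  # stand-in for Gurobi in this sketch

def solve_lp(coeffs):
    """Build and solve one LP from plain numbers (everything picklable).

    maximize  x + y + 2z
    s.t.      x + a*y + b*z <= c
              x + y         >= d,    x, y, z >= 0
    """
    a_i, b_i, c_i, d_i = coeffs
    res = linprog(
        c=[-1, -1, -2],                      # linprog minimizes, so negate
        A_ub=[[1, a_i, b_i], [-1, -1, 0]],   # "x + y >= d" becomes "-x - y <= -d"
        b_ub=[c_i, -d_i],
        bounds=[(0, None)] * 3,
    )
    return -res.fun if res.status == 0 else None  # None flags infeasible LPs

if __name__ == '__main__':
    data = [(2, 3, 4, 1), (2, 5, 2, 7), (8, 6, 3, 3), (7, 8, 5, 2), (3, 5, 7, 9)]
    with multiprocessing.Pool(processes=4) as pool:
        print(pool.map(solve_lp, data))  # LP 2 is infeasible and comes back None
```

Because solve_lp takes only a tuple of numbers, each worker builds its own model and nothing unpicklable ever crosses the process boundary.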