Constraint adding efficiency with Python
Answered

Hi there!
Sorry to post yet another constraint-adding thread, but the existing questions cover different kinds of issues and I did not want to mix them.
I think this is more of a Python programming question, but I have failed to find a solution to my problem.
I'm trying to solve a matching problem implemented with sparse variables (I only create the feasible ones; not sure if that is the correct terminology), and I also need to build and solve the problem under a time limit.
The current model works and solves in a reasonable time, but adding the constraints takes too long (three times as long as solving the model), and one particular set of constraints takes the longest to add. I have tried two ways of adding the constraints; both work, but they have similar elapsed times. Do you have any advice on how I could improve the performance of adding the constraints?
The variables are implemented using a Gurobi tuplelist and Python dictionaries.
I have already tried these two approaches.
Attempt 1:
model.addConstrs(
    (gp.quicksum(var_tuples[tup[0], tup[1], x] for tup in tuple_list.select('*', '*', x))
     + python_dict[x]['dict_key'] <= python_dict[x]['dict_key'])
    for x in python_dict.keys()
)
model.addConstrs(
    (gp.quicksum(
        var_tuples[tup[0], tup[1], x]
        * (python_dict[x]['dict_key'] - python_timestamp
           - dt.timedelta(minutes=other_dict[tup[0]][x])).total_seconds()
        for tup in tuple_list.select('*', '*', x)
    ) >= 0)
    for x in python_dict.keys()
)
Attempt 2:
for x in python_dict.keys():
    tuples_list = [tup for tup in tuple_list if tup[2] == x]
    model.addConstr(
        gp.quicksum(var_tuples[tup[0], tup[1], x] for tup in tuples_list)
        + python_dict[x]['dict_key'] <= branch[x]['other_dict_key']
    )
    model.addConstr(
        gp.quicksum(
            var_tuples[tup[0], tup[1], x]
            * (python_dict[x]['dict_key'] - python_timestamp
               - dt.timedelta(minutes=other_dict[tup[0]][x])).total_seconds()
            for tup in tuples_list
        ) >= 0
    )
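One idea I have been considering, though I'm not sure it is the intended approach: both attempts rescan the full tuple list for every x (via select or the list comprehension). A simplified pure-Python sketch with placeholder data (no gurobipy) of grouping the tuples once up front instead:

```python
from collections import defaultdict

# Placeholder data standing in for the real tuple_list (hypothetical values).
tuple_list = [(0, 0, 'a'), (0, 1, 'a'), (1, 0, 'b'), (2, 1, 'a'), (1, 1, 'b')]

# Slow pattern: rescan the whole list for every key,
# which is what select('*', '*', x) / "if tup[2] == x" does per x.
def per_key_filter(tuples, key):
    return [tup for tup in tuples if tup[2] == key]

# Faster pattern: group all tuples by their last field in a single pass.
groups = defaultdict(list)
for tup in tuple_list:
    groups[tup[2]].append(tup)

# Each per-key group is now an O(1) lookup instead of a full scan;
# quicksum could then iterate groups[x] directly.
assert groups['a'] == per_key_filter(tuple_list, 'a')
assert groups['b'] == per_key_filter(tuple_list, 'b')
```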
Any advice on how I could speed this up?
I do have more cores available on the machine, so maybe I could use multiprocessing or multithreading to add the constraints in parallel?
I hope you are staying safe in the pandemic, and Merry Christmas! :D
-
Hi Sebastian,
Could you provide a minimal working example in order to make your issue reproducible?
Best regards,
Jaromił
-
Hi,
Sorry for the delay in getting back to you. I'll work on a minimal working example.
In the meantime some additional information to provide context: the time it takes to solve the model is about 20 seconds, and it takes about a minute to add all the constraints. My goal is to be able to build and solve the model in under a minute.
I did manage to reduce the time it takes to add the constraints considerably, from about 60s to about 40s, by switching from tuplelist objects to a tupledict, using:
mod.addConstrs(var.sum('*', x) for x in dict.keys())
instead of using quicksum with the tuplelist implementation:
mod.addConstrs(gp.quicksum(var[tup[0], x] for tup in var.select('*', x)) for x in dict.keys())
in case that might help someone :)
In the meantime, I am currently seeing 100% utilization on a single core of the machine when adding constraints, and I have more than one type of constraint (each one being its own mod.addConstrs() call).
Is there a way I could run each constraint type on a different core/process and add them to the model in parallel? I thought of using Python's built-in multiprocessing module, but I have read that Gurobi is not thread-safe, so I'm not sure how to implement this, or whether there is another option in Python.
-
Hi Sebastian,
You could try to use a list for your \(\texttt{x}\) values instead of accessing them via the call to \(\texttt{dict.keys()}\), and then use \(\texttt{quicksum}\) instead of \(\texttt{sum}\). Note that there are even faster ways of constructing large expressions, as described in the documentation of \(\texttt{quicksum}\).
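The reason \(\texttt{quicksum}\) beats Python's built-in \(\texttt{sum}\) is similar to the following pure-Python analogue (not gurobipy itself): built-in sum rebuilds the growing accumulator at every step, which for sequence-like objects is quadratic, while an append-in-place approach, which is what quicksum does for expressions, stays linear:

```python
import itertools

lists = [[1, 2], [3], [4, 5]]

# Quadratic: builtin sum copies the whole growing accumulator at every
# step, analogous to builtin sum over gurobipy expression objects.
flat_slow = sum(lists, [])

# Linear: append in place, analogous to gp.quicksum building one expression.
flat_fast = list(itertools.chain.from_iterable(lists))

assert flat_slow == flat_fast == [1, 2, 3, 4, 5]
```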
Using multiple cores to construct a model seems like a tricky task to do properly, because all threads would have to work with the same objects (model, variables). To ensure thread safety, one would have to impose a certain degree of determinism on the parallel construction, which would most likely nullify the parallel gain. However, using multiple threads might work if the constraints you build can be constructed from independent objects.
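If you do want to try parallelism, one pattern that sidesteps the thread-safety issue is to compute the constraint data in parallel and only touch the model serially in the main thread. A minimal sketch with placeholder names and data (pure Python, no gurobipy; in your case the expensive timedelta/total_seconds() arithmetic would go in the worker):

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder per-key data (hypothetical values); in the real model this
# would be the coefficient computation that is expensive to evaluate.
python_dict = {'a': 10, 'b': 20, 'c': 30}

def build_coefficients(key):
    # Pure computation only: no model or variable objects are touched here.
    return key, [python_dict[key] * i for i in range(3)]

# Step 1: compute constraint data in parallel (safe: no Gurobi objects involved).
with ThreadPoolExecutor() as pool:
    prepared = dict(pool.map(build_coefficients, python_dict))

# Step 2: add the constraints serially in the main thread; here a plain
# list stands in for the model.addConstr calls.
added = [(key, coeffs) for key, coeffs in sorted(prepared.items())]

assert len(added) == len(python_dict)
```

Note that for CPU-bound pure-Python work a ProcessPoolExecutor would be needed to get real speedup (threads are limited by the GIL); the structure of the two steps stays the same.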
Best regards,
Jaromił