
Constraint adding efficiency with Python

Answered

3 comments

  • Jaromił Najman
    Gurobi Staff

    Hi Sebastian,

    Could you provide a minimal working example in order to make your issue reproducible?

    Best regards,
    Jaromił

  • Sebastian Guerraty

    Hi, 

    Sorry for my delay getting back to you. I'll work on having a minimal working example. 

    In the meantime, here is some additional information to provide context: solving the model takes about 20 seconds, while adding all the constraints takes about a minute. My goal is to be able to build and solve the model in under a minute.

    I did manage to reduce the time it takes to add the constraints considerably, from about 60 s to about 40 s, by switching from tuplelist-type objects to a tupledict, using:

    mod.addConstrs(var.sum('*',x) for x in dict.keys())

    instead of using quicksum with the list implementation:

    mod.addConstrs(gp.quicksum(var[tupple[0], x] for tupple in var.select('*', x)) for x in dict.keys())

    in case that might help someone :)
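
    For reference, a minimal self-contained sketch of the tupledict pattern above; the model, index ranges, and the right-hand side are placeholders, since the snippets above omit them:

    import gurobipy as gp

    # Minimal sketch of the tupledict pattern; all data here is illustrative.
    mod = gp.Model("sketch")
    keys = [(i, x) for i in range(100) for x in range(50)]
    var = mod.addVars(keys, name="var")          # var is a gurobipy tupledict

    # One constraint per second index x: sum of var[i, x] over all i <= 1 (placeholder RHS)
    mod.addConstrs((var.sum('*', x) <= 1 for x in range(50)), name="c")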

    I am currently seeing 100% utilization on a single core of the machine when adding constraints, and I have more than one type of constraint (each one being a mod.addConstrs() call).

    Is there a way I could separate each constraint type onto a different core/process and add them to the model in parallel? I thought of using Python's built-in multiprocessing module, but I have read that Gurobi is not thread safe, so I'm not sure how to implement this, or whether there is another option in Python.

  • Jaromił Najman
    Gurobi Staff

    Hi Sebastian,

    You could try to use a list for your \(\texttt{x}\) values instead of accessing them via the call to \(\texttt{dict.keys()}\), and then use \(\texttt{quicksum}\) instead of \(\texttt{sum}\). Note that there are even faster ways to construct large expressions, as described in the documentation of \(\texttt{quicksum}\).
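
    A rough sketch of that suggestion, with placeholder sizes and right-hand sides (both variants build the same constraints and are shown only for comparison; whether addTerms helps will depend on the model):

    import gurobipy as gp

    mod = gp.Model("sketch")
    I = list(range(100))
    X = list(range(50))                      # plain list instead of dict.keys()
    var = mod.addVars(I, X, name="var")

    # quicksum over an explicit generator
    mod.addConstrs((gp.quicksum(var[i, x] for i in I) <= 1 for x in X), name="c_qs")

    # For very large expressions, building a LinExpr with addTerms can be faster still
    for x in X:
        expr = gp.LinExpr()
        expr.addTerms([1.0] * len(I), [var[i, x] for i in I])
        mod.addConstr(expr <= 1, name=f"c_lin[{x}]")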

    Using multiple cores to construct a model seems like a tricky task to do properly, because all threads have to work with the same objects (model, variables). In order to ensure thread safety, one would have to impose a certain degree of determinism on the parallel construction, which would most likely nullify the parallel gain. However, using multiple threads might work if the constraints you build only need to access independent objects to be constructed.
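
    One pattern along those lines, as a sketch only: do the pure-Python work of deriving each constraint's data in worker processes, and keep every gurobipy call in the main process. The helper function and data below are hypothetical, and the dummy preprocessing is far too cheap to benefit from the pool; this only pays off if the Python-side work per constraint is expensive.

    import multiprocessing as mp
    import gurobipy as gp

    def build_rows(x_chunk):
        # Pure-Python preprocessing only: derive the variable keys of each constraint.
        # No gurobipy objects are touched here, so the workers stay independent.
        return [(x, [(i, x) for i in range(100)]) for x in x_chunk]

    if __name__ == "__main__":
        X = list(range(50))
        with mp.Pool(4) as pool:
            chunks = [X[k::4] for k in range(4)]
            rows = [row for part in pool.map(build_rows, chunks) for row in part]

        # All gurobipy calls happen in this single main process.
        mod = gp.Model("sketch")
        var = mod.addVars(range(100), X, name="var")
        for x, keys in rows:
            mod.addConstr(gp.quicksum(var[key] for key in keys) <= 1, name=f"c[{x}]")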

    Best regards,
    Jaromił

