
Vectorizing addition of constraints

Comments

6 comments

  • Greg Glockner
    Gurobi Staff

    Unfortunately, no. However, the following may be more efficient for this model since it avoids some intermediate Python objects:

    for i in range(u.shape[2]):
        model.addMConstrs(u[:, :, i], x, '>', v)
  • Carlos Martin

    @... That yields the following error:

    Traceback (most recent call last):
      ...
        model.addMConstrs(u[:, :, i], x, '>', v)
      File "model.pxi", line 3414, in gurobipy.Model.addMConstrs
    TypeError: object of type 'MVar' has no len()

  • Carlos Martin

    More generally, are there plans to add a completely vectorized interface like that of cvxopt.solvers.lp (ideally with the addition of a batch dimension)? This would be extremely useful.

    Currently cvxopt.solvers.lp (with GLPK) is >6 times faster for me, simply due to model construction/specification overhead. (My program needs to solve a large batch of linear programs, which presently involves constructing and solving models repeatedly.)

  • Robert Luce
    Gurobi Staff

    Hello Carlos,

    I have taken note of your request for an API to solve many small models in bulk. To understand the overhead problem you are facing a little better, it would be very valuable if you could post a concise benchmark example here that demonstrates the enormous overhead w.r.t. cvxopt.solvers.lp/GLPK.

    Greg's suggestion from above won't work (as you saw) because the RHS argument to addMConstrs must be a data vector, not an MVar object. You would need to normalize the constraint u[:, :, i] @ x >= v to the form A @ z >= 0 yourself in order to go through this function, but I'm afraid the benefit will be quite small.
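
    To make that concrete, here is a rough sketch of what that normalization could look like (the shapes m, n, k, the combined variable z, and the zero right-hand side are assumptions on my side, not something taken from your model):

    import numpy as np
    import gurobipy as gp

    # Assume u is a dense numpy array of shape (m, n, k), and x (length n) and
    # v (length m) play the same roles as in the snippets above. Stack them
    # into one variable vector z = [x; v]; then u[:, :, i] @ x >= v becomes
    # [u[:, :, i] | -I] @ z >= 0, which fits addMConstrs with a data RHS.
    m, n, k = u.shape
    model = gp.Model()
    z = model.addMVar(n + m, lb=-gp.GRB.INFINITY)  # adjust bounds to match your x and v

    rhs = np.zeros(m)
    for i in range(k):
        A_i = np.hstack([u[:, :, i], -np.eye(m)])  # block matrix [u[:, :, i] | -I]
        model.addMConstrs(A_i, z, '>', rhs)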

    Thanks,

    Robert

  • Carlos Martin

    Robert Luce Here is an example: solving batches of two-player zero-sum normal-form games (a sketch of the per-game LP follows the timings below). I get times like

    gurobi 0.37076228499999986
    cvxopt 0.009740684000000055

    gurobi 0.39596498599999985
    cvxopt 0.009115292999999802

    gurobi 0.3703212579999997
    cvxopt 0.007263605999999978

    when running on a 2.5 GHz Quad-Core Intel Core i7 processor with 16 GB 1600 MHz DDR3 memory.
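
    For context, each Gurobi timing above comes from building and solving one LP per game along the following lines. This is only an illustrative sketch of the per-game model construction, not the benchmark code itself, and the function and variable names are placeholders:

    import numpy as np
    import gurobipy as gp

    def solve_game(A):
        # Max-min value of a two-player zero-sum game with m x n payoff matrix A:
        # maximize v subject to sum_i A[i, j] * x[i] >= v for every column j,
        # with sum(x) == 1 and x >= 0.
        m, n = A.shape
        model = gp.Model()
        model.Params.OutputFlag = 0
        x = model.addVars(m)                   # mixed strategy (lb = 0 by default)
        v = model.addVar(lb=-gp.GRB.INFINITY)  # game value
        model.addConstr(gp.quicksum(x[i] for i in range(m)) == 1)
        for j in range(n):
            model.addConstr(gp.quicksum(A[i, j] * x[i] for i in range(m)) >= v)
        model.setObjective(v, gp.GRB.MAXIMIZE)
        model.optimize()
        return model.ObjVal

    # The batch is handled by repeating this per game, e.g.
    # values = [solve_game(A) for A in batch], which is where the overhead accumulates.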

  • Carlos Martin

    I see there's a Feature Requests subforum; should I post this there?

    I suppose the ideal interface might look something like

    def max_min(batch):
        model = gurobipy.Model()
        v = model.addMVar(batch.shape[0])
        x = model.addMVar((batch.shape[0], batch.shape[1]))
        model.setObjective(v, gurobipy.GRB.MAXIMIZE)  # note this is vectorized
        model.addConstrs((
            x.sum(1) == 1,
            x >= 0,
            v[:, None] <= (batch * x[:, :, None]).sum(1),
        ))
        model.optimize()
        return model.objVal

    Alternatively, an interface that accepts batches of c, A, b, G, h matrices, as cvxopt.solvers.lp does, would work too (a sketch of the intended semantics is below).
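
    Purely as an illustration of those semantics (the name lp_batch and everything in it are hypothetical; a native implementation would of course avoid the per-item Python loop that this reference version still uses, and equality constraints A, b are omitted for brevity):

    import numpy as np
    import gurobipy as gp

    def lp_batch(c, G, h):
        # Intended semantics: for every index k along the leading batch dimension,
        # solve  minimize c[k] @ z  subject to  G[k] @ z <= h[k],
        # and return the optimal objective values as an array.
        values = []
        for ck, Gk, hk in zip(c, G, h):
            model = gp.Model()
            model.Params.OutputFlag = 0
            z = model.addMVar(ck.shape[0], lb=-gp.GRB.INFINITY)
            model.addConstr(Gk @ z <= hk)
            model.setObjective(ck @ z, gp.GRB.MINIMIZE)
            model.optimize()
            values.append(model.ObjVal)
        return np.array(values)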

