Constraint around a dot product of two columns
In my problem, sensarray is a two-dimensional NumPy array of constants, and plantvar is a two-dimensional GRB.BINARY model variable. hvar is a one-dimensional GRB.CONTINUOUS model variable. I want to add a set of constraints requiring the dot product of each column of plantvar with the corresponding column of sensarray, minus hvar, to be nonnegative.
My line of code looks like this:
for i in range(sensarray.shape[1]):
    m.addConstr(sum(plantvar[:, i] @ sensarray[:, i]) - hvar[i] >= 0)
I get a "'MLinExpr' object is not iterable" error and can't resolve it, whether I use ellipses or slice notation to collect the terms in each column i. What am I doing wrong?

The expression \( \texttt{plantvar[:,i] @ sensarray[:,i]} \) is the scalar-valued dot product of two vectors (like \( c^\top x \)), not the element-wise product of two vectors. Thus, you don't need the \( \texttt{sum()} \) function here.
If this doesn't fix the problem, can you please post a minimal, selfcontained code example that shows exactly how you define all of these objects?
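The same behavior can be reproduced with plain NumPy arrays standing in for the model variables (a sketch for illustration only, not the gurobipy objects themselves): the @ operator on two one-dimensional arrays already returns a scalar, so wrapping the result in sum() fails because a scalar is not iterable.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# @ on two 1-D arrays is a dot product: the result is a single scalar.
dot = a @ b  # 1*4 + 2*5 + 3*6 = 32.0

# Wrapping that scalar in sum() raises TypeError, analogous to the
# "'MLinExpr' object is not iterable" error from gurobipy.
try:
    sum(a @ b)
except TypeError as e:
    print(e)
```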
Thank you, this fix seems to work.
A follow-up problem is to set a constraint on a two-dimensional binary variable pvar. I want each row of the model variable to sum to 1.0, so that the row represents a selection of one time slot in a schedule. When I add a constraint of the form:
m.addConstrs(pvar[i,:].sum == 1 for i in range(pvar[0]))
this makes the model infeasible, and it shouldn't be. Am I doing something wrong here too?
I don't see anything wrong with a constraint of that form. To identify the source of the infeasibility, you could try computing an irreducible inconsistent subsystem (IIS) and writing the infeasible subsystem to disk. E.g.:
m.computeIIS()
m.write('model.ilp')
The IIS is a set of constraints and bounds that together form an infeasible subsystem. If any single constraint or bound is removed from this subsystem, the resulting model becomes feasible.
If I remove this constraint, the model solves, but with an unhelpful result. If I make the equality an inequality '<= 1', then I get a TypeError: '<=' not supported between instances of 'method' and 'int'. This makes me think something about the expression must be wrong, because this inequality should just result in a slightly different, but valid solution to the problem where some entire rows sum to zero instead of 1. Is the problem that pvar is a BINARY variable?
The IIS computed shows 1 constraint, 0 bounds, and says "Encountered an attribute error".
Sorry, I didn't realize
m.addConstrs(pvar[i,:].sum == 1 for i in range(pvar[0]))
was your actual code and not some pseudocode. There are a few problems:
- \(\texttt{pvar[i,:].sum}\) does not call the MVar.sum() method. You need to add parentheses at the end.
- \( \texttt{pvar[0]} \) does not return the number of rows. Use \( \texttt{pvar.shape[0]} \) for this.
Altogether:
m.addConstrs((pvar[i, :].sum() == 1 for i in range(pvar.shape[0])), name='rowsum')
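The .sum-without-parentheses mistake is plain Python, not Gurobi-specific: an attribute access like \( \texttt{pvar[i,:].sum} \) returns the bound method object itself. An equality comparison against that object silently evaluates to False, while an ordering comparison like <= raises the TypeError seen above. A minimal sketch, using a NumPy array as a stand-in for the model variable:

```python
import numpy as np

row = np.array([1, 0, 0])

# Without parentheses, .sum is the method object, not its value:
# a method never compares equal to an int.
print(row.sum == 1)    # prints False
print(row.sum() == 1)  # prints True: the method is actually called

# Ordering comparisons against a method object raise TypeError,
# analogous to the "'<=' not supported between instances of
# 'method' and 'int'" error above.
try:
    row.sum <= 1
except TypeError as e:
    print(e)
```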
Thanks so much, Eli.
Sorry I'm so rusty at Python. I'm trying to dust off some coding skills from about 10 years ago and learn the Gurobi interface at the same time. It's an unfortunate feature of Python that some common syntax errors like this create strange runtime consequences. The key part of my issue was writing ".sum" without the "()".
My model is solving nicely now. I've inspected the results, and they appear to satisfy all constraints. Surprise! Gurobi is also finding a much better solution than OpenSolver could find, and in less than 1 second instead of 4 hours. It takes more time for the Python code to compile and read the input than for the solver to do its thing. This is not the first time I've found that Gurobi can find good solutions to massive binary-variable matrix problems in amazingly short times. I've got lots more work to do now, but you got me unstuck and I appreciate the help, especially in the evening and on a weekend. Here's the output of a recent run. Do you have any suggestions for tuning the optimizer?
Optimize a model with 1427 rows, 71552 columns and 82531 nonzeros
Model fingerprint: 0xb2c70793
Variable types: 52 continuous, 71500 integer (71500 binary)
Coefficient statistics:
Matrix range [1e+00, 7e+02]
Objective range [1e+00, 1e+00]
Bounds range [1e+00, 7e+03]
RHS range [1e+00, 1e+00]
Found heuristic solution: objective 0.0000000
Presolve removed 13 rows and 60534 columns
Presolve time: 0.09s
Presolved: 1414 rows, 11018 columns, 21984 nonzeros
Variable types: 0 continuous, 11018 integer (10966 binary)

Root relaxation: objective 3.266960e+05, 2103 iterations, 0.03 seconds

    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time

     0     0 326696.000    0   21    0.00000 326696.000      -     -    0s
H    0     0                    326356.00000 326696.000  0.10%     -    0s
H    0     0                    326392.00000 326696.000  0.09%     -    0s
H    0     0                    326666.00000 326696.000  0.01%     -    0s

Explored 1 nodes (5057 simplex iterations) in 0.61 seconds
Thread count was 4 (of 4 available processors)

Solution count 4: 326666 326392 326356 0

Optimal solution found (tolerance 1.00e-04)
Best objective 3.266660000000e+05, best bound 3.266960000000e+05, gap 0.0092%
Great, I'm glad that helped.
Gurobi has many different parameters that could affect performance. In this case, I would start by testing MIPFocus=1 or a more aggressive Heuristics setting, because Gurobi finds the (near)optimal solution at the root node via heuristic routines. I would also try Presolve=2 to see if Gurobi could benefit from trying to reduce the problem size further. You can set parameters with, e.g.,
m.params.MIPFocus = 1
You might have success with Gurobi's built-in tuning tool. The tuning tool solves a model with many different parameter combinations in an effort to improve performance. You can call the tuning tool from Python with the Model.tune() method. The small tune.py example is a good place to start.