• Official comment
Gurobi Staff

Thanks for reaching out. I tried running the code you sent and I get a feasible solution, although slightly different from the one you mention; mine has X=3. It is not clear to me why it is infeasible for you. Perhaps the indentation in Python is causing differences?

Gurobi has some tools that will help detect where infeasibilities come from. I'd encourage you to read the following article: How do I determine why my model is infeasible?

Hope that helps.

Kind regards,
Steven

Thank you Steven. I restarted my kernel and got the same answer as you.

Now I was wondering about something else. As you can see I am calculating y based on z, and z based on x, for every time interval in range.

I define x, y, and z to be variables as dictionaries, with keys corresponding to each time interval. Is it possible to define z to not be a Gurobi variable, and instead be a simple dictionary, so that I can calculate z, instead of Gurobi solver trying to solve for z?

When I define z to be a dictionary, and not a variable, Gurobi tells me the model is infeasible.

• Gurobi Staff

Glad to hear you resolved the issue :-).

It isn't clear to me what you mean by "define z to be a dictionary and not a variable, Gurobi tells me the model is infeasible". I would encourage you to read our docs on Constraints. Constraints must be formatted in specific ways that those docs explain nicely. If you are accidentally adding constraints incorrectly, the model will often start behaving unpredictably, for example becoming infeasible. I would also still encourage you to read the link I sent previously, about working out why a model is infeasible.

Hope that helps!

Steve

My apologies Steven. Here is what I mean.

As you can see, I defined z to be a dictionary first. Then I said that, for each key in z, z[time_interval] is defined as a Gurobi variable, using the .addVar method.

Then later, under the 'for time_interval in range(4):' step, I define a constraint z[time_interval + 1] == z[time_interval] + x[time_interval], which effectively means that z at the next time interval equals z at the current time interval plus x at the current time interval.

Then I define a constraint that calculates the value of y at the next time interval, as the product of y at the current time interval, multiplied with the z at the current time interval.

As you can see, even though I defined z at each time interval to be a Gurobi variable, it is simply calculated based on the value of x at the current time interval, since I specified the value of z at time interval 0.

What I am asking is, is there a way to define z, or define a model or a constraint involving z, such that I still calculate z at the next time interval based on x and z at current time intervals, and yet I do not need to define z at each time interval as a Gurobi variable (using .addVar method)?

I would like to define z without making it a Gurobi variable in order to speed up computation (hopefully I can speed it up this way).

I hope this clarifies things a little bit. My apologies for the lengthy message.

• Gurobi Staff

Thanks for the clarification. Yes, it should be possible to express the model just in terms of the x and y variables. Well really, you only need the first of the x variables, x[0].

From x[t+1] = x[t] + 1, you can get x[t] = x[0] + t. Then from z[0] = 1 and z[t+1] = z[t] + x[t], with a bit of thought, I can get z[t] = t*x[0] + t*(t-1)/2 + 1. You will want to check that but I think it is correct.
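The closed form above is easy to check with a quick plain-Python script (no Gurobi needed); the value of x[0] used here is arbitrary:

```python
# Check that the recurrence x[t+1] = x[t] + 1, z[0] = 1, z[t+1] = z[t] + x[t]
# matches the closed form z[t] = t*x[0] + t*(t-1)/2 + 1.
def z_by_recurrence(x0, T):
    x, z = x0, 1
    values = []
    for t in range(T + 1):
        values.append(z)
        z, x = z + x, x + 1  # z uses the current x, then x advances

    return values

def z_closed_form(x0, T):
    return [t * x0 + t * (t - 1) // 2 + 1 for t in range(T + 1)]

assert z_by_recurrence(3, 10) == z_closed_form(3, 10)
assert z_by_recurrence(0, 5) == z_closed_form(0, 5)
```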

Given y[t+1] = y[t] * z[t], you can simply replace z[t] with the expression above. That said, as you point out, I can't see why you would ever choose anything but y=0 for all values of t, but maybe you are still adding additional meaning to the model.
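Likewise, with z[t] in closed form, the recurrence y[t+1] = y[t] * z[t] unrolls into a product, so y can be written in terms of y[0] and x[0] alone. A quick plain-Python check (the initial values are arbitrary):

```python
from math import prod

def z_closed_form(x0, t):
    return t * x0 + t * (t - 1) // 2 + 1

def y_by_recurrence(y0, x0, T):
    y = y0
    for t in range(T):
        y = y * z_closed_form(x0, t)   # y[t+1] = y[t] * z[t]
    return y

# Unrolled: y[T] = y[0] * z[0] * z[1] * ... * z[T-1]
def y_unrolled(y0, x0, T):
    return y0 * prod(z_closed_form(x0, t) for t in range(T))

assert y_by_recurrence(2, 3, 6) == y_unrolled(2, 3, 6)
```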

Hope that helps,

Steven

Hello Steven. Thank you for your previous comment.

I have a question about a slightly harder problem, in which I am trying to minimize the number of variables and constraints that I am using.

Here is the code:

----------------------------------------------------------------------------------

from gurobipy import Model, GRB

solver = Model()

num_time_intervals = 10

x = {}
for time_interval in range(num_time_intervals + 1):
    # NOTE: the addVar bodies were lost in the post as rendered;
    # the bounds here are assumptions (the discussion below mentions
    # float('inf') bounds being used)
    x[time_interval] = solver.addVar(lb=0, ub=float('inf'))

x_inv = {}
for time_interval in range(num_time_intervals + 1):
    x_inv[time_interval] = solver.addVar(lb=0, ub=float('inf'))

y = {}
for time_interval in range(num_time_intervals + 1):
    y[time_interval] = solver.addVar(lb=0, ub=float('inf'))

y_abs = {}
for time_interval in range(num_time_intervals + 1):
    y_abs[time_interval] = solver.addVar(lb=0, ub=float('inf'))

z = {}
for time_interval in range(num_time_intervals + 1):
    z[time_interval] = solver.addVar(lb=0, ub=float('inf'))

w = {}
for time_interval in range(num_time_intervals + 1):
    w[time_interval] = solver.addVar(lb=0, ub=float('inf'))

g = {}
for time_interval in range(num_time_intervals):
    g[time_interval] = solver.addVar(lb=0, ub=float('inf'))

p = {}
for time_interval in range(num_time_intervals):
    p[time_interval] = solver.addVar(lb=0, ub=float('inf'))

X = solver.addVar(lb=0, ub=float('inf'))  # X was used but not defined in the post

for time_interval in range(num_time_intervals):

    solver.addConstr(-14 + g[time_interval] + p[time_interval] + y[time_interval] == 0)

    solver.addConstr(x[time_interval + 1] == x[time_interval] - y[time_interval])
    solver.addConstr(z[time_interval + 1] == z[time_interval]
                     + 0.1*y_abs[time_interval] * x_inv[time_interval])

    solver.addConstr(w[time_interval + 1] == 1 - 0.01*z[time_interval + 1])

    solver.addConstr(x[time_interval] <= X * w[time_interval + 1])

solver.params.NonConvex = 2
solver.setObjective(X, GRB.MINIMIZE)
solver.optimize()

-------------------------------------------------------------------------------------

It is somewhat involved, but the aim is to minimize X. To minimize X I have to minimize x.

I believe it is intuitive that, to minimize x, the constraint 'solver.addConstr(-14 + g[time_interval] + p[time_interval] + y[time_interval] == 0)' can be solved for y by taking the maximum possible values of p and g. Once y is known, the values of x[time_interval + 1], z[time_interval + 1], and w[time_interval + 1] can be calculated.
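For reference, the forward calculation described above can be sketched in plain Python, outside the model. The initial values, the bounds p_max and g_max, and the definitions y_abs = |y| and x_inv = 1/x are all assumptions here, since they are not shown in the post:

```python
def simulate_forward(x0, z0, p_max, g_max, T=10):
    """Roll x, z, w forward once y is pinned down by
    -14 + g + p + y == 0 at the maximal p and g."""
    x, z, w = x0, z0, {}
    for t in range(T):
        y = 14 - g_max - p_max            # solve the balance constraint for y
        x_next = x - y                    # x[t+1] == x[t] - y[t]
        z_next = z + 0.1 * abs(y) / x     # assumes y_abs = |y|, x_inv = 1/x
        w[t + 1] = 1 - 0.01 * z_next      # w[t+1] == 1 - 0.01*z[t+1]
        x, z = x_next, z_next
    return w
```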

You will notice that I can remove the w variable and replace it with the expression '1 - 0.01*z[time_interval + 1]' in the constraint 'solver.addConstr(x[time_interval] <= X * w[time_interval + 1])'.

However, in the constraint 'solver.addConstr(w[time_interval + 1] == 1 - 0.01*z[time_interval + 1])', I cannot simply substitute the expression for z[time_interval + 1] from 'solver.addConstr(z[time_interval + 1] == z[time_interval] + 0.1*y_abs[time_interval] * x_inv[time_interval])', because then I would have no way to calculate z[time_interval + 1] for use in the update at the next time interval.

So it appears that, since z[time_interval] is updated recursively and depends on its previous value, I cannot eliminate this variable from my model. What I want to know is: is there any way for me to avoid defining z as a variable in my model (i.e., via the Model.addVar() method), and yet still have a way to calculate w for every time interval?

Any help on this would be greatly appreciated. Thank you!

In addition to the above, maybe a slightly simpler problem can be considered. Again, I am trying to minimize the number of variables I am using; my eventual goal is to speed up the code. Here is the code:

---------------------------------------------------------------------

from gurobipy import Model, GRB

solver = Model()

num_time_intervals = 4

x = {}
for time_interval in range(num_time_intervals + 1):
    # NOTE: the addVar bodies were lost in the post as rendered;
    # the bounds here are assumptions
    x[time_interval] = solver.addVar(lb=0, ub=float('inf'))

y = {}
for time_interval in range(num_time_intervals + 1):
    y[time_interval] = solver.addVar(lb=0, ub=float('inf'))

z = {}
for time_interval in range(num_time_intervals + 1):
    z[time_interval] = solver.addVar(lb=0, ub=float('inf'))

p = {}
for time_interval in range(num_time_intervals):
    p[time_interval] = solver.addVar(lb=0, ub=float('inf'))

X = solver.addVar(lb=0, ub=float('inf'))  # X was used but not defined in the post

for time_interval in range(num_time_intervals):

    solver.addConstr(-6 + p[time_interval] + y[time_interval] == 0)

    solver.addConstr(x[time_interval + 1] == x[time_interval] - y[time_interval])
    solver.addConstr(z[time_interval + 1] == z[time_interval]
                     - 0.01*y[time_interval] * x[time_interval])

    solver.addConstr(x[time_interval] <= X * z[time_interval + 1])

solver.params.NonConvex = 2
solver.setObjective(X, GRB.MINIMIZE)
solver.optimize()

----------------------------------------------------------------------------

Just as in the post above, to minimize X, I need to minimize x. And to minimize x I need to minimize y on every time step. To minimize y I need to maximize p.

z[time_interval + 1] is then calculated from z, y, and x at the previous time step.

Is there a way to calculate z[time_interval + 1] without making it a Gurobi variable via the Model.addVar() method? Is there a way to define z in such a way as to make sure it gets calculated and multiplies X in the 'solver.addConstr(x[time_interval] <= X * z[time_interval + 1])' constraint, but not make Gurobi 'select' z and instead simply calculate it forward?
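For what it's worth, the forward calculation being asked about is straightforward in plain Python; the question is whether it can live inside the Gurobi model. In this sketch the initial values and the bound p_max are assumptions, since they are not shown in the post:

```python
def roll_forward_z(x0, z0, p_max, T=4):
    """Compute z[0..T] by the recurrence, without any solver variables."""
    x, z = x0, z0
    zs = [z]
    for t in range(T):
        y = 6 - p_max          # from -6 + p + y == 0 at the maximal p
        z = z - 0.01 * y * x   # z[t+1] uses x at the current step
        x = x - y              # then x advances: x[t+1] = x[t] - y[t]
        zs.append(z)
    return zs
```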

I am making an assumption here that reducing the number of variables will speed up my simulation. Please advise me if this may not be true. Thank you!

• Gurobi Staff

I had a quick look and I don't believe there is a way to remove the z variables completely (I might be wrong). Given the equality constraint

z[time_interval + 1] == z[time_interval]
                        - 0.01*y[time_interval] * x[time_interval]

you could simply replace the LHS with the RHS in the constraint

x[time_interval] <= X * (z[time_interval]
                         - 0.01*y[time_interval] * x[time_interval])

although this is now a lot of quadratic constraints.

Regarding "I am making an assumption here that reducing the number of variables will speed up my simulation":

Reducing the number of variables does not always speed up the solve time, but it is often a good starting point. I will point out, though, that you can see in the logs that Gurobi performs its own Presolve, which automatically reduces the number of variables and constraints.

For the current solve you can see this in the log in the following line:

Presolve removed 8 rows and 7 columns

Our Presolve algorithms are very sophisticated, so they will normally do an extremely good job of reducing the size of the problem.

If you are interested in improving solve time, I would instead recommend focusing on adding tighter upper bounds to your variables. Currently, you use a lot of float('inf'). However, your model has a number of bilinear constraints, which rely on tight variable bounds to perform well.

Hope that helps!

Steven