Gurobi: Define a variable as a function of itself
Hello everybody,
I am using the R package of Gurobi, but I am open to answers for any framework of Gurobi :)
I am wondering if there is a way (and if it makes sense) to define a Linear Programming problem as the maximization of some variables with constant coefficients:
max 1*x1 + 1*x2 + ... + 1*xN
and then define each of the variables x as a function of themselves:
max 1*x1 + 1*x2 + ... + 1*xN
s.t
x1 = f(x1)
x2 = f(x2)
.
.
xN = f(xN)
In this case I want to use the same f(x) for all the variables.
Note: the LP model would have more constraints but I want to ask you all if this is possible/feasible/coherent.
Thank you very much! :)

Hi Roger,
I am wondering if there is a way (and if it makes sense) to define a Linear Programming problem as the maximization of some variables with constant coefficients
This might make sense, especially for human readability of the problem. Gurobi will perform multiple presolve operations anyway and reformulate your problem.
then define each of the variables x as a function of themselves
This is possible. In Python this would look something like
from gurobipy import *

# define function f(x)
def function(x):
    # here it is f(x) = 2*x^2 + 3*x
    result = QuadExpr(2*x*x + 3*x)
    return result

m = Model("myModel")
x = m.addVar(vtype=GRB.CONTINUOUS, name="x")
y = m.addVar(vtype=GRB.CONTINUOUS, name="y")

# f_x is a QuadExpr as defined above
f_x = function(x)
f_y = function(y)

# add the x = f(x) constraint
m.addConstr(x - f_x == 0, name="x=fx")
m.addConstr(y - f_y == 0, name="y=fy")

# construct objective function
m.setObjective(x + y, GRB.MAXIMIZE)

# write LP file for sanity check
m.write("myLP.lp")

In the above I used QuadExpr to construct a quadratic expression. You can use any available expression you like, e.g., LinExpr or GenExpr, as long as you deal with it correctly when constructing constraints in the main function.
Best regards,
Jaromił
Thank you very much for your clear answer. Now, a few edits:
If instead of f(x) being the same function for all variables, I had N functions f_i(x) for i=1...N, and all these functions were logarithmic (i.e., f_i(x_i) = c_i * log(a_i * x_i + b_i)), then the problem would be linear with respect to the functions f_i(x) but not with respect to the variables x_i. What class of problem would this be? Thank you! Regards
This would be a (mixed-integer) nonlinear problem (MINLP). You can model the \(\log\) function in Gurobi via piecewise-linear approximation using the addGenConstrLog function. This will require the introduction of additional variables and equality constraints, because the function only accepts exactly one variable as input and output, i.e., it is used to model the term \(y = \log(x)\). This means that in order to model an equality constraint
\[c_i \cdot \log(a_i \cdot x_i + b_i) - x_i = 0\]
you would have to introduce 2 auxiliary variables and 2 equality constraints to model
\[\begin{align}
w_i - a_i \cdot x_i - b_i &= 0 \\
z_i - \log(w_i) &= 0 \quad \text{(this is where you use the addGenConstrLog function)} \\
c_i \cdot z_i - x_i &= 0
\end{align}\]
Best regards,
Jaromił
Hi Jaromil,
As a follow-up to the issue of "adding functions of variables" and your example code above, is it a problematic approach to
- first, add functions of variables as variables, e.g., f_x = model.addVar(...),
- and then insert an equality constraint to define the function, e.g., model.addConstr(f_x == QuadExpr(2*x*x + 3*x)),
in terms of the convergence to an optimal solution? One reason for defining it this way could be to be able to retrieve the value of f_x in the solution just by calling its X attribute, just like for other variables.
I have a fairly low-dimensional problem that somehow does not converge, and I suspect the above approach I'm using to define the functions of variables.
Kind regards,
Serhan
Hi Serhan,
Introducing additional variables should not affect the convergence of the algorithm. This holds only to some extent, of course, but as long as you do not introduce a vast number of such auxiliaries, there should be no harm in doing so.
I have a fairly low-dimensional problem that somehow does not converge, and I suspect the above approach I'm using to define the functions of variables.
Did you try both versions? With and without auxiliaries?
Best regards,
Jaromił
Hi Jaromil,
Thank you for your answer.
Yes, I did in the meantime, and it seems it is not because of those auxiliaries, as you also pointed out.
Then I noticed that a division (X/Y) is causing the extended runtime although it was implemented through another auxiliary DY (where Y*DY=1 and Y>0) as described here and with 'NonConvex' set to 2.
Anyway, hopefully it will be sorted out :)
Kind regards
Then I noticed that a division (X/Y) is causing the extended runtime although it was implemented through another auxiliary DY (where Y*DY=1 and Y>0) as described here and with 'NonConvex' set to 2.
It is expected that the model gets much harder when you introduce nonconvex terms, even if you introduce only a few.
You could try providing tight lower and upper bounds for all variables participating in nonlinear terms. Consider your particular case \(\frac{1}{y}\). Only providing a lower bound \(y > \epsilon\) with a small \(\epsilon\) is not helpful. Yes, you avoid division by \(0\), but the slope of the \(\frac{1}{y}\) term close to \(0\) increases rapidly, while the slope for large \(y\) is almost constant. This can lead to all sorts of numerical difficulties during the optimization process and ultimately to slow convergence. However, if you provide bounds of, let's say, \([10^{-2}, 3]\), then the \(\frac{1}{y}\) term is nicely bounded and should not lead to any additional trouble except for "just being" nonconvex.
Best regards,
Jaromił