The nonlinear function in the optimization objective contains optimization variables
We use Gurobi to solve the following problem. The optimization variables are underlined in red. Since the optimization objective includes a log function that contains optimization variables, we introduce auxiliary variables, which are underlined in green.
Now for the details. The objective is to choose M numbers from 1 to K and, for each such combination, calculate lg(σ_Z² − MSE); the final objective is the average of these values over all K!/(M!(K−M)!) combinations. To enumerate the combinations we use a depth-first search. Below we give the code for the optimization objective.
import math

import gurobipy as gp
from gurobipy import GRB
# create model
model = gp.Model("MyModel")
# declare variables
q = model.addVars(2, lb=0, ub=400, vtype=GRB.CONTINUOUS, name="q")
a = model.addVars(K, name="a")
v = model.addVars(K, name="v")
r = model.addVars(K, name="r")
varpi = model.addVars(1, name="varpi")
nu = model.addVars(1, name="nu")
# Set the objective. We choose M indices from 0 to K-1; for each
# combination we compute sum7, and the contributions of all
# K!/(M!(K-M)!) combinations are accumulated in sumvalue.
# The combinations are enumerated with a depth-first search.
# The Gurobi variables below are v and nu; everything else is data.
sumvalue = 0
combination = []


def gain(i):
    # Channel gain beta0 / (H^2 + ||q0 - upos_i||^2) of user i (pure data,
    # no model variables)
    return beta0 / (
        H**2 + (q0[0] - upos[i][0]) ** 2 + (q0[1] - upos[i][1]) ** 2
    )


def dfs(last_idx, cnt):
    global sumvalue
    if cnt == M:
        # sum2: double sum over the current combination (constant w.r.t. the model)
        sum2 = 0
        for km in range(M):
            for km1 in range(M):
                sum2 += (
                    b0[combination[km]] * math.sqrt(gain(combination[km]))
                    * b0[combination[km1]] * math.sqrt(gain(combination[km1]))
                    * cov(combination[km], combination[km1])
                )
        # sum3: linear in the auxiliary variables v
        sum3 = 0
        for km in range(M):
            sum4 = 0
            for km1 in range(M):
                sum4 += (
                    b0[combination[km1]] * math.sqrt(gain(combination[km1]))
                    * cov(combination[km], combination[km1])
                )
            sum3 += (
                (v[combination[km]] - gain(combination[km]))
                * 0.5 * b0[combination[km]] * gain(combination[km]) ** (-0.5)
                * sum4 / (sum2 + N0w)
            )
        # sum5: symmetric counterpart of sum3
        sum5 = 0
        for km1 in range(M):
            sum6 = 0
            for km in range(M):
                sum6 += (
                    b0[combination[km]] * math.sqrt(gain(combination[km]))
                    * cov(combination[km], combination[km1])
                )
            sum5 += (
                (v[combination[km1]] - gain(combination[km1]))
                * 0.5 * b0[combination[km1]] * gain(combination[km1]) ** (-0.5)
                * sum6 / (sum2 + N0w)
            )
        # Contribution of this combination; nu replaces the log term
        sum7 = 2 * nu[0] - math.log(K**2) - (math.log(sum2 + N0w) + sum3 + sum5)
        sumvalue = sumvalue + sum7
        return
    for idx in range(last_idx + 1, K):
        combination.append(idx)
        dfs(idx, cnt + 1)
        combination.pop()


dfs(-1, 0)
obj = math.factorial(M) * math.factorial(K - M) / math.factorial(K) * sumvalue
model.setObjective(obj, GRB.MAXIMIZE)
However, there are two problems with the constraints.
1. As described above, M numbers are selected from 1 to K. If K = 50, the number of combinations is enormous, so we use a depth-first search to build the optimization objective instead of storing all combinations in an oversized matrix. But constraints (d) and (e) are equality constraints that every combination must satisfy.
If they need to be added before
model.setObjective(obj, GRB.MAXIMIZE)
appears, how should I add them inside my dfs function? Since they are constraints, my understanding is that they should not be listed separately before the objective appears, while the other constraints are listed after the objective appears.
If they are added after the optimization objective instead, the memory-capacity issue arises again, and we cannot directly list the constraints of every combination.
2. For Gurobi to solve a convex optimization problem, the log function in the optimization objective cannot contain variables or variable expressions. How can the log function be kept without replacing it by auxiliary variables?
This is a follow-up from the post KeyError: 0. sum1=sum1+b0[combination[km]][0]*math.sqrt(beta0/(H**2+a[combination[km]][0]))*cov(k,combination[km]).
Have you considered using Python's itertools package to generate the combinations/permutations you need? I think that would make your code easier to follow.
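For illustration, here is a minimal sketch of how itertools.combinations enumerates the M-subsets lazily (K and M are small placeholder values here; the real problem uses K=50, M=20):

```python
from itertools import combinations

K, M = 5, 2  # placeholder sizes for illustration

# combinations() is a generator: subsets are produced one at a time,
# so nothing is stored beyond the current combination.
for combo in combinations(range(K), M):
    pass  # build this combination's objective contribution here

# The number of M-subsets is K!/(M!(K-M)!); count them without storing them:
n_combos = sum(1 for _ in combinations(range(K), M))
```

Like the depth-first search in the question, iterating over the generator never materializes the full list of combinations; only an explicit `list(combinations(...))` would.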
If the number of constraints is too high to realistically add to the model, you could consider adding those constraints as lazy constraints in a callback. This is done in the tsp.py traveling salesman problem Python example.
You can model nonlinear functions like the \(\log\) function via piecewise-linear approximation. In Python, you can add a piecewise-linear approximation of the \(\log\) function to your Gurobi model via Model.addGenConstrLog(). Note that this function can only be used to set one variable equal to the \(\log\) of another variable. Thus, to model something like \(u = \log(\sigma_Z^2 - \textrm{MSE})\), you will have to introduce auxiliary variables and instead model \( u = \log(w) \), \(w = \sigma_Z^2 - \textrm{MSE}\).
Hi Eli, based on your comments I would like to ask a few follow-up questions.
First, my objective is to choose M numbers from 1 to K, calculate lg(σ_Z² − MSE) for each combination, and finally take the average over all K!/(M!(K−M)!) combinations. If K = 50 and M = 20, there are 50!/(20!·30!) ≈ 4.7129212e+13 combinations. If I use Python's itertools package to generate the combinations/permutations, is the storage capacity sufficient, whether I save them as a list or as a matrix? Because of the storage capacity, I use the depth-first search so that I do not need to store all the combinations; instead, for each combination found, I directly calculate lg(σ_Z² − MSE).
Second, all constraints except constraint (a) are introduced to replace the log function in the optimization objective. It is only because of the equality constraints (b)-(e) that the optimization objective can be fully expressed. So adding those constraints as lazy constraints in a callback does not apply, correct?
Next, although we have used the depth-first search to construct the optimization objective and avoid the storage-capacity problem, constraints (d)-(e) still scale with the number of combinations. How can the constraints be written to avoid the capacity problem? Since the log general constraint can only set one variable equal to the log of another variable, constraints (d)-(e) are required; can this kind of log-function processing be avoided?
The above are all my questions, looking forward to your help! Many thanks!
All the best,
Yali
Are other optimizers more suitable for solving this problem?
Are \( M \) and \( K \) fixed beforehand? If so, you can remove the \(M!(K-M)!/K!\) from the objective function, as it is a constant.
If I use Python's itertools package to generate the combinations/permutations, whether I save the combinations as a list or a matrix, is the storage capacity sufficient?
Do you mean the amount of available memory on your machine? It depends on your machine. Note that these itertools functions are generators, meaning the combinations/permutations are generated "on-the-fly" as you iterate over them. If you convert this generator into a list, that will require storing every combination in memory. Additionally, adding variables/constraints to the model for every combination will also require a considerable amount of memory and make the model extremely large.
Have you looked into the literature for how others solve problems like this? It's not very realistic to build a model with 40 trillion variables/constraints. Perhaps there is a sampling-based approach where you randomly sample \( n \) combinations from the set of all possible combinations, then optimize the model with respect to those \( n \) combinations.
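A minimal sketch of such a sampling approach, assuming uniformly random M-subsets are acceptable (n is a placeholder sample size):

```python
import random

K, M, n = 50, 20, 1000  # sample n combinations instead of all ~4.7e13
random.seed(0)  # fixed seed for reproducibility

# Each draw is one uniformly random M-subset of {0, ..., K-1}
sampled = [tuple(sorted(random.sample(range(K), M))) for _ in range(n)]

# Build the objective (and constraints) only for the sampled combinations,
# averaging over n instead of over K!/(M!(K-M)!).
```

This keeps both the model size and the memory footprint proportional to n rather than to the full number of combinations.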