
The nonlinear function in the optimization objective contains optimization variables

Comments

4 comments

  • Eli Towle
Gurobi Staff

This is a follow-up to the post "KeyError: 0. sum1=sum1+b0[combination[km]][0]*math.sqrt(beta0/(H**2+a[combination[km]][0]))*cov(k,combination[km])".

    Have you considered using Python's itertools package to generate the combinations/permutations you need? I think that would make your code easier to follow.
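To illustrate the itertools suggestion, here is a minimal sketch (the set sizes are made up for illustration; the real problem uses much larger \(K\) and \(M\)):

```python
import itertools
import math

K, M = 5, 3  # illustrative sizes only

# itertools.combinations is a generator: subsets are produced one at a
# time as you iterate, without materializing all of them in memory.
combos = itertools.combinations(range(1, K + 1), M)

first = next(combos)                # the first M-subset, (1, 2, 3)
total = 1 + sum(1 for _ in combos)  # count the rest lazily

assert total == math.comb(K, M)     # C(5, 3) = 10
```

Because the combinations are yielded lazily, you can process each one (e.g., build its term of the objective) without ever holding the full set of subsets in memory.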

    If the number of constraints is too high to realistically add to the model, you could consider adding those constraints as lazy constraints in a callback. This is done in the tsp.py traveling salesman problem Python example.

    You can model nonlinear functions like the \(\log\) function via piecewise-linear approximation. In Python, you can add a piecewise-linear approximation of the \(\log\) function to your Gurobi model via Model.addGenConstrLog(). Note that this function can only be used to set one variable equal to the \(\log\) of another variable. Thus, to model something like \(u = \log(\sigma_Z^2 - \textrm{MSE})\), you will have to introduce auxiliary variables and instead model \( u = \log(w) \), \(w = \sigma_Z^2 - \textrm{MSE}\).

  • Yali Chen

Hi Eli, based on your comments I would like to ask some follow-up questions.

First, my objective is to choose \(M\) numbers from \(1\) to \(K\); for each such combination \(I\), calculate \(\log(\sigma_Z^2 - \textrm{MSE})\); and finally take the average over all \(K!/(M!(K-M)!)\) combinations. If \(K=50\) and \(M=20\), there are \(50!/(20!\,30!) \approx 4.7129212 \times 10^{13}\) combinations. If I use Python's itertools package to generate the combinations/permutations, is the storage capacity sufficient, whether I save the combinations as a list or as a matrix? Because of the storage capacity, I use depth-first search, so that I do not need to store all the combinations; instead, I calculate \(\log(\sigma_Z^2 - \textrm{MSE})\) for each combination as it is visited.
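To make the storage concern concrete, here is a small check (pure Python, independent of Gurobi) confirming the count for \(K=50\), \(M=20\) and showing that a generator visits the combinations one at a time, much like a depth-first search:

```python
import itertools
import math

# Number of ways to choose M = 20 items out of K = 50.
n_combinations = math.comb(50, 20)
print(n_combinations)  # 47129212243960, i.e. about 4.71e13

# Storing every combination as a tuple of 20 ints would need far more
# memory than any machine has, so they must be generated on the fly.
# itertools.combinations does exactly that; here we only peek at the
# first one instead of exhausting the generator.
gen = itertools.combinations(range(1, 51), 20)
first = next(gen)
assert first == tuple(range(1, 21))
```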

Second, apart from constraint (a), the other constraints are introduced only to replace the \(\log\) function in the optimization objective. It is precisely the equality constraints (b)–(e) that allow the objective to be fully expressed. Does that mean adding those constraints as lazy constraints in a callback does not apply here?

Third, although we have used depth-first search to construct the optimization objective and avoid the storage capacity problem, constraints (d)–(e) still involve the number of combinations. How can these constraints be written so as to avoid the capacity problem? Since the \(\log\) general constraint can only set one variable equal to the \(\log\) of another variable, constraints (d)–(e) become necessary; is there a way to avoid this kind of \(\log\) reformulation altogether?

Those are all my questions; I look forward to your help. Many thanks!

    All the best,

    Yali

  • Yali Chen

Are other optimizers more suitable for solving this problem?

  • Eli Towle
Gurobi Staff

    Are \( M \) and \( K \) fixed beforehand? If so, you can remove the \(M!(K-M)!/K!\) from the objective function, as it is a constant.

"If I use Python's itertools package to generate the combinations/permutations, whether I save the combinations as a list or a matrix, is the storage capacity sufficient?"

    Do you mean the amount of available memory on your machine? It depends on your machine. Note that these itertools functions are generators, meaning the combinations/permutations are generated "on-the-fly" as you iterate over them. If you convert this generator into a list, that will require storing every combination in memory. Additionally, adding variables/constraints to the model for every combination will also require a considerable amount of memory and make the model extremely large.

Have you looked into the literature for how others solve problems like this? It's not very realistic to build a model with roughly 47 trillion variables/constraints. Perhaps there is a sampling-based approach where you randomly sample \( n \) combinations from the set of all possible combinations, then optimize the model with respect to those \( n \) combinations.
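A sampling-based variant could look like the following sketch (pure Python; the sample size \(n\) is a placeholder you would tune, not part of any Gurobi API):

```python
import random

K, M, n = 50, 20, 1000  # sample n combinations instead of all C(K, M)

random.seed(0)  # reproducible sampling for illustration

# Draw n random M-subsets of {1, ..., K}. Duplicate subsets are possible
# in principle but vanishingly unlikely here, since C(50, 20) ~ 4.7e13.
sampled = [tuple(sorted(random.sample(range(1, K + 1), M)))
           for _ in range(n)]

assert len(sampled) == n
assert all(len(c) == M for c in sampled)

# The model would then be built with variables/constraints only for
# these n sampled combinations, instead of all C(K, M) of them.
```

The objective's average over all combinations is then approximated by the average over the \(n\) sampled ones, keeping the model size proportional to \(n\) rather than \(\binom{K}{M}\).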

