Iterate over a Gurobi decision variable in an integrated normal distribution
My problem: iterating over a Gurobi "Var" decision variable raises TypeError: '<' not supported between instances of 'Var' and 'int', plus an issue with exponentiation (i.e., ** / pow()).
Key information on the Gurobi optimization:
- Objective function: for three items maximize the sum of (price * expected value)
- Expected value is calculated through two defined formulas:
- 1) PDF = probability density function
- 2) EV = expected value, which is the integral of the PDF over a specific interval
- The decision variable "upperBound" is the upper bound of this integration interval (the lower bound is 0), chosen by the model so as to maximize the objective
Below is the model:
from gurobipy import *
import pandas as pd, numpy as np, scipy.integrate as integrate
import math

mu = pd.DataFrame(np.array([10, 15, 20]), index=["product1", "product2", "product3"])
sigma = pd.DataFrame(np.array([1, 1.5, 2]), index=mu.index.values)
price = pd.Series(np.array([10, 10, 10]), index=mu.index.values)

m = Model("maxUpperBound")
var = m.addVars(mu.index.values, vtype=GRB.INTEGER, name="upperBound")

def PDF(y, mu, sigma):
    return y * (1.0 / (sigma * (2.0 * math.pi)**(1/2)) * math.exp(-1.0 * (y - mu)**2 / (2.0 * (sigma**2))))

def EV(ub, mu, sigma):
    lb = 0
    value, error = integrate.quad(PDF, lb, ub, args=(mu, sigma))
    return value

m.setObjective(
    quicksum(
        price.loc[i] * EV(var[i], mu.loc[i], sigma.loc[i])
        for i in mu.index.values
    ),
    GRB.MAXIMIZE)

m.addConstr(
    (quicksum(var[i]
              for i in mu.index.values)
     <= 100),
    "Limit100"
)

m.optimize()

for v in m.getVars():
    print(v.varName, v.x)
Official comment
Hi Jonas,
The issue here is that you are effectively asking Gurobi to optimize with respect to an oracle function (the expected value function, which integrates a complicated PDF over bounds that depend on the variables). Gurobi 9.0 can solve mixed-integer problems consisting of linear, (convex or nonconvex) quadratic, or second-order cone constraints/objectives. Unfortunately, this function cannot be explicitly cast in that form, so Gurobi cannot handle it. For more information on which model classes Gurobi is able to solve, see here.
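For reference, the reported TypeError can be reproduced in isolation: scipy.integrate.quad() expects numeric integration bounds, and a gurobipy Var does not support the numeric comparisons quad() performs internally. A minimal sketch (the Var stands in for var[i] from the model above):

from gurobipy import Model, GRB
import scipy.integrate as integrate

m = Model("repro")
x = m.addVar(vtype=GRB.INTEGER, name="upperBound")

# quad() needs numeric bounds; passing a Var raises a TypeError such as
# "'<' not supported between instances of 'Var' and 'int'"
# (the exact message depends on the SciPy version).
integrate.quad(lambda y: y, 0, x)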
To solve this, you could try:
- Using a piecewise-linear approximation of the expected value function (e.g., Model.addGenConstrPWL() in Python); see the sketch after this list. This is not solving the true problem, but rather an approximation of it.
- Using software for more general nonlinear optimization or "black-box" optimization. These methods may find local solutions to the problem or may have weak guarantees on solution quality.
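For illustration, here is a minimal sketch of the PWL approach, assuming Gurobi 9.0 or later. It reuses the PDF/EV definitions from the post (with plain dicts instead of DataFrames), tabulates EV at the integer breakpoints 0..100 (an illustrative grid; the Limit100 constraint already caps each bound), and lets addGenConstrPWL() tie an auxiliary expectedValue variable to each upperBound variable:

from gurobipy import Model, GRB, quicksum
import scipy.integrate as integrate
import math

products = ["product1", "product2", "product3"]
mu = dict(zip(products, [10.0, 15.0, 20.0]))
sigma = dict(zip(products, [1.0, 1.5, 2.0]))
price = dict(zip(products, [10.0, 10.0, 10.0]))

def PDF(y, mu_i, sigma_i):
    return y * math.exp(-((y - mu_i) ** 2) / (2.0 * sigma_i ** 2)) / (sigma_i * math.sqrt(2.0 * math.pi))

def EV(ub_i, mu_i, sigma_i):
    value, _ = integrate.quad(PDF, 0, ub_i, args=(mu_i, sigma_i))
    return value

m = Model("maxUpperBoundPWL")
ub = m.addVars(products, vtype=GRB.INTEGER, ub=100, name="upperBound")
ev = m.addVars(products, name="expectedValue")

# Tabulate EV at fixed numeric breakpoints and tie ev[i] to ub[i]
# through a piecewise-linear general constraint.
breakpoints = list(range(101))
for i in products:
    values = [EV(x, mu[i], sigma[i]) for x in breakpoints]
    m.addGenConstrPWL(ub[i], ev[i], breakpoints, values, name="pwl_" + i)

m.setObjective(quicksum(price[i] * ev[i] for i in products), GRB.MAXIMIZE)
m.addConstr(ub.sum() <= 100, "Limit100")
m.optimize()

for i in products:
    print(i, ub[i].X, ev[i].X)

Since the upperBound variables are integer and the grid includes every integer in [0, 100], the PWL function is exact at all feasible points here; for continuous variables, a finer grid trades model size for approximation quality.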
I hope this helps. Thanks!
Eli
Hello Eli, would this be true of selecting thresholds in a logistic regression model?
For example, could an indicator function be created to approximate the number of predicted positives/negatives, and could a cost be assigned to each case?
Hi Rohan,
If I understood your question correctly, you want to:
1. train a binary classifier.
2. control either the accuracy on the positive class (a.k.a. sensitivity), or the accuracy on the negative class (a.k.a. specificity).
You can achieve this by training a Supersparse Linear Integer Model (SLIM) using Mixed-Integer Linear Programming (refer to the section "Operational Constraints" >> "Loss Constraints for Imbalanced Data"). The authors of the cited paper enforce integer weights, but you can always drop the integrality constraints and work with real-valued weights to relax the model and, hopefully, make it easier to solve.
Please note that the base algorithm uses the 0-1 loss function to compute the error over the training dataset, which in theory is better than using surrogate loss functions such as the logistic loss. Also, be aware that the resulting MIP model may be difficult to solve to optimality for some datasets. However, one advantage of using mathematical optimization (rather than heuristics or local optimization) is that you can stop the solver once a time limit is reached, recover the current best solution (a.k.a. the incumbent), and investigate its quality by looking at the MIP gap, as sketched below.
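A minimal sketch of that stop-and-inspect workflow in gurobipy, using a toy MIP as a stand-in for the SLIM formulation (the parameter and attribute names are standard Gurobi API; the model itself is illustrative):

from gurobipy import Model, GRB

# Toy MIP standing in for a SLIM formulation.
m = Model("toy")
x = m.addVars(3, vtype=GRB.BINARY, name="x")
m.setObjective(x.sum(), GRB.MAXIMIZE)
m.addConstr(x.sum() <= 2, "cap")

m.Params.TimeLimit = 600  # stop branch-and-bound after 600 seconds

m.optimize()

if m.SolCount > 0:  # at least one incumbent was found
    print("incumbent objective:", m.ObjVal)
    print("relative MIP gap:", m.MIPGap)
    if m.Status == GRB.TIME_LIMIT:
        print("time limit reached; the incumbent may be suboptimal")
else:
    print("no feasible solution found within the time limit")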
I hope this helps,
Juan Orozco