Solving a problem with a neural network constraint
Hello Gurobi community,
I am currently using the machine learning library for Gurobi to model a simple problem with a nonlinear constraint. When I try to solve this problem, the solver takes days or maybe weeks to find a solution.
When I tune the solver parameters, in particular model.Params.MIPGap, setting it to 10% (0.1), it sometimes finds a solution quickly.
Another thing is that when I add this constraint, the complexity increases!
- model.addConstr(Ebatt[i+1] - Ebatt[i] + Pbattc[i] + Pbattd[i] == 0, name=f"SOC_constraint_{i}")
Do you have any suggestions, please?
Thank you in advance for your collaboration.
Othmane.
import gurobipy as gp
from gurobipy import GRB
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from gurobi_ml import add_predictor_constr

# Function to define the efficiency as a function of power
def true_function(x):
    return (
        -1.04445269246507e-10 * x**6
        + 2.73481065262457e-08 * x**5
        - 2.83217614149491e-06 * x**4
        + 1.47487958382642e-04 * x**3
        - 4.05797325184919e-03 * x**2
        + 5.60736260619428e-02 * x
        + 6.59472289427077e-01
    )

# Generate input data
np.random.seed(142)  # For reproducibility
X = np.linspace(0, 75, 10000)  # Generate 10000 input values between 0 and 75
y = [true_function(x) for x in X]  # Calculate corresponding output values
# Create a Gurobi model for optimization
model = gp.Model("GeneratorOptimization")
model.Params.NonConvex = 2
model.Params.MIPGap = 0.1

# Define decision variables for each hour
# (dt, Pmaxch, Pmaxdch, Ichmax, Idchmax, Vdchmax, Vchmax, Emin, Emax,
#  Einit, SOCmax, Ppv and path are defined elsewhere in the full script)
num_hours = 24
Pgin = model.addVars(range(num_hours), lb=0, ub=30.0, name="Pgin")  # Adjust the upper bound if needed
Pginreal = model.addVars(range(num_hours), lb=0, ub=30.0, name="Pginreal")  # Adjust the upper bound if needed
Pbattc = model.addVars(dt, lb=-Pmaxch, ub=0, name="Pbattc")
Pgout = model.addVars(dt, lb=-GRB.INFINITY, ub=0, name="Pgout")
Pbattd = model.addVars(dt, lb=0, ub=Pmaxdch, name="Pbattd")
Pbatt = model.addVars(dt, lb=-Pmaxch, ub=Pmaxdch, name="Pbatt")
Ibatt = model.addVars(dt, lb=-Ichmax, ub=Idchmax, name="Ibatt")
Vbatt = model.addVars(dt, lb=Vdchmax, ub=Vchmax, name="Vbatt")
Ebatt = model.addVars(dt + 1, lb=Emin, ub=Emax, name="Ebatt")
Pload = pd.read_excel(path, sheet_name="ITN-Load", usecols=["PloadVAR"], nrows=dt).values.reshape(-1)

# Neural network training
nn_model = MLPRegressor(hidden_layer_sizes=(100, 100), max_iter=1000, random_state=42)
nn_model.fit(X.reshape(-1, 1), y)  # Train the neural network

# Efficiency prediction using the neural network
def predict_efficiency(generator_power):
    return nn_model.predict([[generator_power]])[0]

# Decision variables: Efficiency predicted by the neural network for each hour
efficiency_var = model.addVars(range(num_hours), lb=0.0, ub=1.0, name="Efficiency")
# Objective function:
model.setObjective(sum(Pgin[i]*0.22 for i in range(num_hours)), GRB.MINIMIZE)

model.addConstr(Ebatt[0] == Einit*SOCmax, name="Initial_SOC_constraint")
for i in range(num_hours):
    model.addConstr(Pgin[i]*efficiency_var[i] + Pbatt[i] + Ppv[i] == Pload[i] - Pgout[i], name=f"Balance_{i}")  # Power balance
    model.addConstr(Pbatt[i] == Ibatt[i] * Vbatt[i], name=f"ModeleBatterie_constraint_{i}")
    # model.addConstr(Pgin[i]*efficiency_var[i] == Pginreal[i], name=f"MinPgin_{i}")
    model.addConstr(Pbatt[i] == Pbattc[i] + Pbattd[i], name=f"BattSplit_{i}")  # Charge/discharge split
    model.addConstr(Ebatt[i+1] - Ebatt[i] + Pbattc[i] + Pbattd[i] == 0, name=f"SOC_constraint_{i}")
    model.addConstr(Ebatt[i] <= Einit*SOCmax, name=f"Max_SOC_constraint_{i}")

# Use add_predictor_constr to set up the efficiency constraint for each hour
for i in range(num_hours):
    pred_constr = add_predictor_constr(model, nn_model, [Pgin[i]], [efficiency_var[i]])

# Solve the optimization problem
model.optimize()
-
Hi Othmane,
Thanks for your question. It's hard to say why it is so slow. Some models with neural network constraints can indeed be very slow, and sometimes it is very hard to find a feasible solution.
If you can share the model, it would be easier to have a look.
I had a look at your Python code, though, and the first thing that comes to mind is that it would probably be more efficient to formulate the polynomial of true_function directly in Gurobi.
Gurobi can model products of variables, so by creating additional variables you can directly model the polynomial in `true_function`. What I mean is: you can create a variable y = x*x, then to get x^3 you create a variable z = x*y, and so on.
This can introduce some approximation, but I think it would be much smaller than that of the neural network, and the model would be more compact.
Best regards,
Pierre