Quadratic constraints contain large coefficient range on linear part / max constraint violation (1.0000e+00) exceeds tolerance
We use Gurobi to solve the following problem. The optimization problem is shown as follows. Optimization variables are underlined in red.
The code is shown as follows.
from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random
#define global variables
# global K
K = 10
global M
M = 5
global ATtheta1
ATtheta1=0.42
global ATtheta2
ATtheta2=2.00
global CPtheta1
CPtheta1=0.11
global CPtheta2
CPtheta2=0.68
global CStheta1
CStheta1=0.05
global CStheta2
CStheta2=0.62
global H
H=100
global Pmax
Pmax=10**(-2)
global N0w
N0w=10**(-11)
global beta0
beta0=10**(-4)
global threshold
threshold=10**(-3)
global sumcombination
sumcombination=int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))
#The Positions of K users are given randomly
# global upos
upos = []
for i in range(0,K):
    upos.append([400*random.random(),400*random.random()])
#define distance
# def DS(i,j):
return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)
#The maximum value is obtained in order to normalize the distance
# global DSset
DSset= []
global DSmax
for i in range(0,K):
    for j in range(0,K):
        DSset.append(DS(i,j))
DSmax=max(DSset)
#define function
# def cov(i,j):
return math.exp(-((DS(i,j)/DSmax/ATtheta1)**ATtheta2))
global deltaz
sum0=0
for k in range(0,K):
    for k1 in range(0,K):
        sum0=sum0+cov(k,k1)
deltaz=sum0/K/K
#initialized variables
# b0=math.sqrt(Pmax)*np.array(ones((K)))
q0=np.array([250,250])
#create model
# model = gp.Model("MyModel2")
#declare variables
# q=model.addVars(2, lb=0, ub=400, vtype=GRB.CONTINUOUS, name="q")
a=model.addVars(K, name="a")
v=model.addVars(K, name="v")
r=model.addVars(K, lb=0, name="r")
t=model.addVars(K, K, name="t")
nu=model.addVars(sumcombination, name="nu")
#set objective(The objective function is to choose M numbers from 1 to K, and for each combination, calculate the sum7, and finally obtain the sum value of K!/M!/(K-M)! combinations, which is shown as sumvalue. For how to get the combination, we use the depth-first search approach. In bold below are the optimization variables.
# sumvalue=0
for i in range(0, sumcombination):
    sumvalue=sumvalue+deltaz-nu[i]
obj=math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
model.setObjective(obj, GRB.MINIMIZE)
model.addConstrs(a[k]==(q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0,K))
model.addConstrs(v[k]*(H**2+a[k])/beta0 == 1 for k in range(0,K))
#for k in range(0,K):
# #model.addGenConstrPow(v[k], r[k], 0.5, "FuncPieces=-2 FuncPieceError=0.001")
# model.addConstrs(v[k] == (r[k])**2 for k in range(0,K))
for k in range(0,K):
    for k1 in range(0,K):
        model.addConstr(t[k,k1] == r[k]*r[k1])
combination=[]
index=0
def dfs1(last_idx, cnt):
    global index
    if cnt == M:
        print(combination)
        sum1=0
        for k in range(0,K):
            for km in range(0,M):
                sum1=sum1+(b0[combination[km]]*r[combination[km]]*cov(k,combination[km]))
        sum2=0
        for km in range(0,M):
            for km1 in range(0,M):
                sum2=sum2+(b0[combination[km]]*b0[combination[km1]]*t[combination[km],combination[km1]]*cov(combination[km],combination[km1]))
        model.addConstr(sum1**2 == nu[index]*K*K*(sum2+N0w))
        model.addConstr(deltaz-nu[index] >= 0)
        index=index+1
        return
    for idx in range(last_idx+1,K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()
dfs1(-1,0)
model.params.NonConvex=2
model.params.Presolve=0
model.params.NumericFocus=1
model.optimize()
print("Obj: ", model.objVal)
opt=model.objVal
However, the code has the problems shown in the log below. I sincerely hope the support staff can help me solve them. Looking forward to your reply! Many thanks!
Changed value of parameter NonConvex to 2
Prev: -1 Min: -1 Max: 2 Default: -1
Changed value of parameter Presolve to 0
Prev: -1 Min: -1 Max: 2 Default: -1
Changed value of parameter NumericFocus to 1
Prev: 0 Min: 0 Max: 3 Default: 0
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 252 rows, 384 columns and 252 nonzeros
Model fingerprint: 0xa46dc3a4
Model has 382 quadratic constraints
Coefficient statistics:
Matrix range [1e+00, 1e+00]
QMatrix range [3e-03, 1e+04]
QLMatrix range [1e-09, 1e+08]
Objective range [4e-03, 4e-03]
Bounds range [4e+02, 4e+02]
RHS range [6e-01, 6e-01]
QRHS range [1e+00, 2e+05]
Warning: Quadratic constraints contain large coefficient range on linear part
Continuous model is non-convex -- solving as a MIP.
Variable types: 6687 continuous, 0 integer (0 binary)
Root relaxation: objective -9.992007e-16, 2 iterations, 0.01 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 -0.00000 0 10 - -0.00000 - - 0s
0 0 -0.00000 0 10 - -0.00000 - - 0s
0 2 -0.00000 0 10 - -0.00000 - - 0s
942 1148 -0.00000 128 7 - -0.00000 - 0.5 6s
2486 2683 -0.00000 341 7 - -0.00000 - 0.2 10s
* 3015 470 388 -0.0000000 -0.00000 0.00% 0.2 11s
Explored 3044 nodes (559 simplex iterations) in 11.08 seconds
Thread count was 8 (of 8 available processors)
Solution count 1: -2.77556e-15
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (1.0000e+00) exceeds tolerance
Best objective -2.775557561563
All the best,
Yali
Hi Yali,
Your code snippet is not executable. There are stray \(\texttt{return}\) statements whose function definitions are commented out.
As the warning message indicates, the linear part of your quadratic constraints has a very wide range: it includes coefficients as small as 1e-9 and as big as 1e8.
- Does a coefficient as small as 1e-9 have any meaning in your application? If this is just noise and does not have any meaning, you can consider zeroing these tiny coefficients.
- The default feasibility and optimality tolerance values are 1e-6 and can be tightened as small as 1e-9. It is a good practice to set these tolerances to be at least one order of magnitude smaller than the smallest meaningful coefficient in your model.
- We highly recommend defining tight lower and upper bounds for all variables participating in non-linear terms.
- You can consider experimenting with parameters such as ScaleFlag. Setting this parameter to value 2 is typically helpful for models with numerical challenges.
- Why did you turn presolve off? Some presolve procedures, in particular Aggregate, can worsen numerical challenges. You can experiment with Aggregate=0 if presolve is indeed troublesome for your model.
- You can consider experimenting with higher values of NumericFocus such as 2 and 3.
- You can use the method Model.printQuality() available in Gurobi Python API to find out the constraint name which has the maximum constraint violation.
The suggestions above do not address the root cause of the problem; they can only help alleviate the numerical challenges of the model. The best approach would be to scale your variables so that the model coefficients become more reasonable.
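To make the scaling advice concrete, here is a small pure-Python illustration (the substitution w[k] = v[k]/beta0 is just an example of mine, not a change you must make) of how rescaling one variable shrinks the largest coefficient in your constraint v[k]*(H**2+a[k])/beta0 == 1:

```python
# Values taken from the posted code.
beta0 = 1e-4
H = 100

# The constraint v[k]*(H**2 + a[k])/beta0 == 1 expands to
#   (H**2/beta0)*v[k] + (1/beta0)*v[k]*a[k] == 1,
# so its largest linear coefficient is H**2/beta0 = 1e8 -- exactly the top
# of the QLMatrix range reported in the log.
coeffs_original = [H**2 / beta0, 1 / beta0]

# After the hypothetical rescaling w[k] = v[k]/beta0 (with the bounds of
# w[k] divided by beta0 as well), the same constraint reads
#   w[k]*(H**2 + a[k]) == 1, whose coefficients are four orders smaller:
coeffs_scaled = [H**2, 1.0]

print(max(coeffs_original), max(coeffs_scaled))
```

The goal is to bring all coefficients in the model to within a few orders of magnitude of each other; the same idea applies to the r, t, and nu variables.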
Best regards,
Maliheh
Hi, Maliheh
Thank you very much for your detailed reply!
We use Gurobi to solve the following problem. First, let me introduce the details. We choose M numbers from 1 to K, calculate the MSE for each combination, and finally obtain the sum over all K!/M!/(K-M)! combinations.
The optimization problem is shown as follows. Optimization variables are underlined in red.
To facilitate the solution, we introduce auxiliary variables. The optimization problem is then reformulated as follows. Optimization variables are underlined in blue.
Following your comments, we set the noise power spectral density to 0 rather than a very small number. We added "model.params.NumericFocus=2, model.params.FeasibilityTol=10**(-9), model.params.OptimalityTol=10**(-9), model.params.ScaleFlag=2, model.params.Aggregate=0". We have modified the upper and lower bounds of the variables, but for the variables in constraint (f) we cannot provide tighter bounds. Currently, our code and the remaining issues are as follows.
from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random
#define global variables
global K
K = 10
global M
M = 5
global ATtheta1
ATtheta1=0.42
global ATtheta2
ATtheta2=2.00
global CPtheta1
CPtheta1=0.11
global CPtheta2
CPtheta2=0.68
global CStheta1
CStheta1=0.05
global CStheta2
CStheta2=0.62
global H
H=100
global Pmax
Pmax=10**(-2)
global N0w
#N0w=10**(-11)
N0w=0
global beta0
beta0=10**(-4)
global threshold
threshold=10**(-3)
global sumcombination
sumcombination=int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))
#The Positions of K users are given randomly
global upos
upos = []
for i in range(0,K):
    upos.append([400*random.random(),400*random.random()])
#define distance
def DS(i,j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)
#The maximum value is obtained in order to normalize the distance
global DSset
DSset= []
global DSmax
for i in range(0,K):
    for j in range(0,K):
        DSset.append(DS(i,j))
DSmax=max(DSset)
#define function
def cov(i,j):
    return math.exp(-((DS(i,j)/DSmax/ATtheta1)**ATtheta2))
#
#
global deltaz
sum0=0
for k in range(0,K):
    for k1 in range(0,K):
        sum0=sum0+cov(k,k1)
deltaz=sum0/K/K
#
#initialized variables
b0=math.sqrt(Pmax)*np.array(ones((K)))
q0=np.array([250,250])
#create model
model = gp.Model("MyModel2")
#declare variables
q=model.addVars(2, lb=0, ub=400, vtype=GRB.CONTINUOUS, name="q")
a=model.addVars(K, lb=0, ub=(400*math.sqrt(2))**2, name="a")
v=model.addVars(K, lb=beta0/(H**2+(400*math.sqrt(2))**2), ub=beta0/H/H, name="v")
r=model.addVars(K, lb=math.sqrt(beta0/(H**2+(400*math.sqrt(2))**2)), ub=math.sqrt(beta0/H/H), name="r")
t=model.addVars(K, K, lb=beta0/(H**2+(400*math.sqrt(2))**2), ub=beta0/H/H, name="t")
#varpi=model.addVars(sumcombination, lb=0, name="varpi")
nu=model.addVars(sumcombination, lb=0, ub=deltaz, name="nu")
#set objective(The objective function is to choose M numbers from 1 to K, and for each combination, calculate the sum7, and finally obtain the sum value of K!/M!/(K-M)! combinations, which is shown as sumvalue. For how to get the combination, we use the depth-first search approach. In bold below are the optimization variables.
sumvalue=0
for i in range(0, sumcombination):
    sumvalue=sumvalue+deltaz-nu[i]
obj=math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
model.setObjective(obj, GRB.MINIMIZE)
model.addConstrs(a[k]==(q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0,K))
model.addConstrs(v[k]*(H**2+a[k])/beta0 == 1 for k in range(0,K))
#for k in range(0,K):
#model.addGenConstrPow(v[k], r[k], 0.5, "FuncPieces=-2 FuncPieceError=0.001")
model.addConstrs(v[k] == (r[k])**2 for k in range(0,K))
for k in range(0,K):
    for k1 in range(0,K):
        model.addConstr(t[k,k1] == r[k]*r[k1])
combination=[]
index=0
def dfs1(last_idx, cnt):
    global index
    if cnt == M:
        print(combination)
        sum1=0
        for k in range(0,K):
            for km in range(0,M):
                sum1=sum1+(b0[combination[km]]*r[combination[km]]*cov(k,combination[km]))
        sum2=0
        for km in range(0,M):
            for km1 in range(0,M):
                sum2=sum2+(b0[combination[km]]*b0[combination[km1]]*t[combination[km],combination[km1]]*cov(combination[km],combination[km1]))
        #model.addConstr(varpi[index] == sum1)
        #model.addConstr((varpi[index])**2 == nu[index]*K*K*(sum2+N0w))
        model.addConstr(sum1**2 == nu[index]*K*K*(sum2+N0w))
        index=index+1
        return
    for idx in range(last_idx+1,K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()
dfs1(-1,0)
model.params.NonConvex=2
#model.params.Presolve=0
model.params.NumericFocus=2
model.params.FeasibilityTol=10**(-9)
model.params.OptimalityTol=10**(-9)
model.params.ScaleFlag=2
model.params.Aggregate=0
#
model.optimize()
print("Obj: ", model.objVal)
print("q: ", q)
opt=model.objVal
model.printQuality()
Changed value of parameter NonConvex to 2
Prev: -1 Min: -1 Max: 2 Default: -1
Changed value of parameter NumericFocus to 2
Prev: 0 Min: 0 Max: 3 Default: 0
Changed value of parameter FeasibilityTol to 1e-09
Prev: 1e-06 Min: 1e-09 Max: 0.01 Default: 1e-06
Changed value of parameter OptimalityTol to 1e-09
Prev: 1e-06 Min: 1e-09 Max: 0.01 Default: 1e-06
Changed value of parameter ScaleFlag to 2
Prev: -1 Min: -1 Max: 3 Default: -1
Changed value of parameter Aggregate to 0
Prev: 1 Min: 0 Max: 1 Default: 1
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 0 rows, 384 columns and 0 nonzeros
Model fingerprint: 0x7abb6cb6
Model has 382 quadratic constraints
Coefficient statistics:
Matrix range [0e+00, 0e+00]
QMatrix range [3e-03, 1e+04]
QLMatrix range [1e+00, 1e+08]
Objective range [4e-03, 4e-03]
Bounds range [3e-10, 3e+05]
RHS range [0e+00, 0e+00]
QRHS range [1e+00, 3e+05]
Continuous model is non-convex -- solving as a MIP.
Presolve time: 0.00s
Presolved: 25928 rows, 6686 columns, 61820 nonzeros
Presolved model has 6422 bilinear constraint(s)
Variable types: 6686 continuous, 0 integer (0 binary)
Root relaxation: objective 1.297265e-02, 11453 iterations, 0.77 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 0.01297 0 10 - 0.01297 - - 0s
0 0 0.01298 0 10 - 0.01298 - - 0s
0 2 0.01298 0 10 - 0.01298 - - 3s
1 4 0.01314 1 12 - 0.01298 - 42634 20s
15 18 0.01314 4 12 - 0.01314 - 3967 25s
71 70 0.01393 10 12 - 0.01314 - 1097 36s
79 76 infeasible 12 - 0.01314 - 1429 55s
93 83 0.01524 13 10 - 0.01314 - 1916 66s
112 94 infeasible 14 - 0.01314 - 1701 81s
135 109 0.01524 16 11 - 0.01314 - 1641 87s
154 125 0.01524 17 11 - 0.01314 - 1591 98s
185 146 infeasible 20 - 0.01314 - 1475 108s
201 157 0.01533 21 12 - 0.01314 - 1499 111s
226 175 infeasible 24 - 0.01314 - 1351 119s
254 185 0.01342 6 10 - 0.01314 - 1299 121s
272 201 infeasible 7 - 0.01314 - 1230 126s
294 216 0.01314 8 328 - 0.01314 - 1230 130s
313 232 0.01314 9 329 - 0.01314 - 1344 151s
341 249 0.01474 10 12 - 0.01314 - 1479 156s
364 265 0.01314 10 11 - 0.01314 - 1468 161s
390 276 infeasible 9 - 0.01314 - 1391 174s
417 284 0.01358 11 10 - 0.01314 - 1479 181s
443 293 0.01395 12 12 - 0.01314 - 1529 220s
472 331 0.01387 12 10 - 0.01314 - 1467 226s
526 345 0.01314 12 11 - 0.01314 - 1524 282s
562 356 0.01340 13 14 - 0.01314 - 1435 286s
601 408 infeasible 14 - 0.01314 - 1353 422s
673 448 infeasible 15 - 0.01314 - 1666 457s
745 473 infeasible 15 - 0.01314 - 1505 461s
814 485 0.01368 18 12 - 0.01314 - 1429 490s
862 493 0.01323 19 11 - 0.01314 - 1363 502s
878 503 0.01364 21 11 - 0.01314 - 1378 534s
896 511 0.01332 23 10 - 0.01314 - 1554 578s
908 516 0.01435 23 14 - 0.01314 - 1569 587s
923 521 0.01330 26 14 - 0.01314 - 1590 607s
942 606 0.01748 33 12 - 0.01314 - 1605 629s
1058 608 0.01748 95 0 - 0.01362 - 1535 676s
1060 610 0.01402 8 0 - 0.01362 - 1532 695s
1075 619 0.01362 17 19 - 0.01362 - 1570 720s
1083 625 0.01362 18 19 - 0.01362 - 1580 766s
1088 629 0.01457 20 16 - 0.01362 - 1637 795s
1094 633 0.01495 20 37 - 0.01362 - 1666 800s
1100 637 0.01362 19 19 - 0.01362 - 1662 855s
1110 640 0.01362 20 19 - 0.01362 - 1772 891s
1116 644 0.01362 21 19 - 0.01362 - 1854 922s
1126 642 0.01362 23 14 - 0.01362 - 1919 934s
1131 658 0.01362 24 12 - 0.01362 - 1921 959s
1159 687 infeasible 25 - 0.01362 - 1922 1019s
1211 690 0.01362 25 376 - 0.01362 - 1973 1080s
1264 704 infeasible 26 - 0.01362 - 2018 1143s
1339 780 0.01380 25 19 - 0.01362 - 2013 1161s
1472 860 0.01389 20 13 - 0.01362 - 1857 1193s
1601 963 0.01409 25 11 - 0.01362 - 1763 1205s
1821 1019 0.01462 44 10 - 0.01362 - 1561 1266s
2050 1028 0.01399 25 11 - 0.01362 - 1429 1295s
* 2057 597 171 0.0259407 0.01362 47.5% 1434 1295s
* 2070 564 177 0.0258932 0.01362 47.4% 1425 1295s
2211 442 0.01362 27 14 0.02589 0.01362 47.4% 1367 1335s
2233 447 0.01440 28 11 0.02589 0.01362 47.4% 1396 1368s
2271 461 0.01489 29 14 0.02589 0.01362 47.4% 1407 1395s
2380 455 infeasible 37 0.02589 0.01362 47.4% 1388 1475s
2474 546 0.01411 44 10 0.02589 0.01362 47.4% 1388 1498s
2665 525 0.01477 81 11 0.02589 0.01362 47.4% 1318 1538s
2825 529 0.01477 133 1 0.02589 0.01362 47.4% 1277 1598s
* 2826 306 134 0.0147692 0.01362 7.81% 1277 1598s
2938 330 cutoff 64 0.01477 0.01362 7.81% 1258 1685s
3036 342 cutoff 63 0.01477 0.01362 7.81% 1273 1720s
3083 426 0.01365 34 10 0.01477 0.01362 7.81% 1300 1812s
3231 434 infeasible 44 0.01477 0.01362 7.81% 1284 1858s
3277 436 cutoff 34 0.01477 0.01362 7.81% 1312 1910s
3349 449 0.01362 24 18 0.01477 0.01362 7.81% 1331 1972s
3410 446 infeasible 29 0.01477 0.01362 7.81% 1355 2031s
3447 449 cutoff 29 0.01477 0.01362 7.81% 1374 2095s
3506 455 0.01362 22 45 0.01477 0.01362 7.81% 1413 2182s
3576 457 0.01362 35 46 0.01477 0.01362 7.81% 1443 2257s
3606 467 cutoff 35 0.01477 0.01362 7.81% 1469 2314s
3654 474 0.01419 46 18 0.01477 0.01362 7.81% 1481 2400s
3691 466 infeasible 32 0.01477 0.01362 7.81% 1525 2444s
3739 486 0.01366 31 14 0.01477 0.01362 7.80% 1595 2594s
3841 504 0.01362 32 489 0.01477 0.01362 7.80% 1593 2664s
3895 516 0.01400 33 19 0.01477 0.01362 7.80% 1609 2707s
3989 522 0.01416 34 20 0.01477 0.01362 7.80% 1601 2840s
4051 519 cutoff 37 0.01477 0.01366 7.54% 1633 2915s
4084 526 infeasible 37 0.01477 0.01372 7.09% 1660 2964s
4183 545 0.01396 29 14 0.01477 0.01372 7.09% 1666 3009s
4300 571 0.01372 27 17 0.01477 0.01372 7.09% 1656 3046s
4422 591 0.01406 23 12 0.01477 0.01372 7.09% 1654 3084s
4518 596 cutoff 27 0.01477 0.01372 7.09% 1664 3153s
4593 602 0.01419 51 18 0.01477 0.01372 7.09% 1675 3212s
4655 626 0.01374 27 14 0.01477 0.01374 6.95% 1695 3289s
4745 653 0.01374 30 15 0.01477 0.01374 6.95% 1709 3364s
4918 660 infeasible 39 0.01477 0.01377 6.80% 1704 3448s
5029 663 0.01377 42 23 0.01477 0.01377 6.80% 1734 3535s
5130 667 cutoff 35 0.01477 0.01377 6.79% 1767 3605s
5218 676 0.01379 24 13 0.01477 0.01379 6.61% 1796 3675s
5365 672 cutoff 34 0.01477 0.01381 6.47% 1793 3766s
5459 669 0.01477 22 13 0.01477 0.01384 6.32% 1836 3975s
5540 668 cutoff 23 0.01477 0.01384 6.32% 1871 4051s
5589 658 cutoff 31 0.01477 0.01388 5.99% 1917 4116s
5625 658 0.01434 27 12 0.01477 0.01389 5.98% 1952 4199s
5731 658 infeasible 41 0.01477 0.01395 5.57% 1967 4277s
5795 661 0.01419 55 18 0.01477 0.01396 5.46% 1995 4338s
5914 667 cutoff 30 0.01477 0.01396 5.46% 2012 4420s
6050 665 infeasible 39 0.01477 0.01397 5.44% 2029 4482s
6174 664 0.01409 27 12 0.01477 0.01397 5.43% 2051 4553s
6281 670 0.01397 29 11 0.01477 0.01397 5.42% 2082 4775s
6403 703 0.01410 43 10 0.01477 0.01397 5.42% 2097 4879s
6586 725 0.01419 55 18 0.01477 0.01399 5.27% 2103 4956s
6824 752 infeasible 32 0.01477 0.01399 5.27% 2103 5048s
7143 844 0.01399 34 12 0.01477 0.01399 5.27% 2075 5235s
7847 1134 0.01401 34 13 0.01477 0.01401 5.11% 1958 5303s
* 8144 506 113 0.0084364 0.00844 0.00% 1911 5303s
Explored 9203 nodes (15696379 simplex iterations) in 5303.68 seconds
Thread count was 8 (of 8 available processors)
Solution count 4: 0.00843635 0.0147692 0.0258932 0.0259407
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (5.4127e-09) exceeds tolerance
Best objective 8.436353171704e-03, best bound 8.436353171704e-03, gap 0.0000%
Obj: 0.008436353171704492
q: {0: <gurobi.Var q[0] (value 267.562984392664)>, 1: <gurobi.Var q[1] (value 55.53033496620341)>}
Solution quality statistics for model MyModel2 :
Maximum violation:
Bound : 0.00000000e+00
Constraint : 5.41270806e-09 (QC250)
Integrality : 0.00000000e+00
I want to know whether my problem reformulation is correct, how to solve this problem, and how to determine whether the output solution is correct.
Looking forward to your reply! Many thanks!
All the best,
Yali
Your formulation of using auxiliary variables to translate your model into a form which can be implemented using gurobipy looks fine to me.
how to determine if the output solution is correct.
- To some extent, you need to trust that Gurobi provides the optimal solution. You can of course do a number of different sanity checks:
- For a smaller size instance of your model, you can find the optimal solution via enumerating all possible solutions and check that the optimal solution found matches the solution returned by Gurobi.
- If Gurobi reports \(z^*\) as the optimal objective value for a minimization problem, you can try to solve the model again with the additional constraint \( z \leq z^* - \epsilon\). We expect Gurobi to terminate with an infeasibility status, because it has already concluded that no solution better than \(z^*\) exists.
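To illustrate the first check: your objective can be evaluated directly for any fixed position q, so a small instance can be checked without a solver. A minimal sketch, using a made-up instance (K=4, M=2, random positions; the constants mirror your code) and a coarse grid in place of a full enumeration:

```python
import itertools
import math
import random

random.seed(0)
K, M = 4, 2                        # small made-up instance
H, beta0, N0w, Pmax = 100, 1e-4, 0.0, 1e-2
ATtheta1, ATtheta2 = 0.42, 2.0
upos = [[400*random.random(), 400*random.random()] for _ in range(K)]
b0 = [math.sqrt(Pmax)] * K

def DS(i, j):
    return math.hypot(upos[i][0]-upos[j][0], upos[i][1]-upos[j][1])

DSmax = max(DS(i, j) for i in range(K) for j in range(K))

def cov(i, j):
    return math.exp(-((DS(i, j)/DSmax/ATtheta1)**ATtheta2))

deltaz = sum(cov(k, k1) for k in range(K) for k1 in range(K))/K/K

def objective(qx, qy):
    """Evaluate the averaged MSE objective directly at position (qx, qy)."""
    # r[k] = sqrt(beta0/(H^2 + |q - upos[k]|^2)), as in the model.
    r = [math.sqrt(beta0/(H**2 + (qx-upos[k][0])**2 + (qy-upos[k][1])**2))
         for k in range(K)]
    combos = list(itertools.combinations(range(K), M))
    total = 0.0
    for S in combos:
        sum1 = sum(b0[m]*r[m]*cov(k, m) for k in range(K) for m in S)
        sum2 = sum(b0[m]*b0[m1]*r[m]*r[m1]*cov(m, m1) for m in S for m1 in S)
        total += deltaz - sum1**2/(K*K*(sum2 + N0w))  # nu from constraint (f)
    return total/len(combos)

# A coarse grid search gives an independent estimate to compare with Gurobi.
best = min((objective(x, y), x, y)
           for x in range(0, 401, 50) for y in range(0, 401, 50))
print(best)
```

If Gurobi's reported objective at its optimal q is noticeably larger than such an independently computed value, something is wrong with the formulation.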
Best regards,
Maliheh
Dear Maliheh,
The optimization problem is as follows, and optimization variables are underlined in blue.
Currently, our code and existing issues are as follows.
from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random
#define global variables
global K
K = 10
global M
M = 5
global ATtheta1
ATtheta1=0.42
global ATtheta2
ATtheta2=2.00
global CPtheta1
CPtheta1=0.11
global CPtheta2
CPtheta2=0.68
global CStheta1
CStheta1=0.05
global CStheta2
CStheta2=0.62
global H
H=100
global Pmax
Pmax=10**(-2)
global N0w
#N0w=10**(-11)
N0w=0
global beta0
beta0=10**(-4)
global threshold
threshold=10**(-3)
global sumcombination
sumcombination=int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))
#The Positions of K users are given randomly
global upos
upos = []
for i in range(0,K):
    upos.append([400*random.random(),400*random.random()])
#define distance
def DS(i,j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)
#The maximum value is obtained in order to normalize the distance
global DSset
DSset= []
global DSmax
for i in range(0,K):
    for j in range(0,K):
        DSset.append(DS(i,j))
DSmax=max(DSset)
#define function
def cov(i,j):
    return math.exp(-((DS(i,j)/DSmax/ATtheta1)**ATtheta2))
global deltaz
sum0=0
for k in range(0,K):
    for k1 in range(0,K):
        sum0=sum0+cov(k,k1)
deltaz=sum0/K/K
#initialized variables
b0=math.sqrt(Pmax)*np.array(ones((K)))
q0=np.array([250,250])
#create model
model = gp.Model("MyModel2")
#declare variables
q=model.addVars(2, lb=0, ub=400, vtype=GRB.CONTINUOUS, name="q")
a=model.addVars(K, lb=0, ub=(400*math.sqrt(2))**2, name="a")
v=model.addVars(K, lb=beta0/(H**2+(400*math.sqrt(2))**2), ub=beta0/H/H, name="v")
r=model.addVars(K, lb=math.sqrt(beta0/(H**2+(400*math.sqrt(2))**2)), ub=math.sqrt(beta0/H/H), name="r")
t=model.addVars(K, K, lb=beta0/(H**2+(400*math.sqrt(2))**2), ub=beta0/H/H, name="t")
nu=model.addVars(sumcombination, lb=0, ub=deltaz, name="nu")
#set objective(The objective function is to choose M numbers from 1 to K, and for each combination, calculate the sum7, and finally obtain the sum value of K!/M!/(K-M)! combinations, which is shown as sumvalue. For how to get the combination, we use the depth-first search approach. In bold below are the optimization variables.
sumvalue=0
for i in range(0, sumcombination):
    sumvalue=sumvalue+deltaz-nu[i]
obj=math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
model.setObjective(obj, GRB.MINIMIZE)
model.addConstrs(a[k]==(q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0,K))
model.addConstrs(v[k]*(H**2+a[k])/beta0 == 1 for k in range(0,K))
#for k in range(0,K):
# model.addGenConstrPow(v[k], r[k], 0.5, "FuncPieces=-2 FuncPieceError=0.0001")
model.addConstrs(v[k] == (r[k])**2 for k in range(0,K))
for k in range(0,K):
    for k1 in range(0,K):
        model.addConstr(t[k,k1] == r[k]*r[k1])
combination=[]
index=0
def dfs1(last_idx, cnt):
    global index
    if cnt == M:
        print(combination)
        sum1=0
        for k in range(0,K):
            for km in range(0,M):
                sum1=sum1+(b0[combination[km]]*r[combination[km]]*cov(k,combination[km]))
        sum2=0
        for km in range(0,M):
            for km1 in range(0,M):
                sum2=sum2+(b0[combination[km]]*b0[combination[km1]]*t[combination[km],combination[km1]]*cov(combination[km],combination[km1]))
        model.addConstr(sum1**2 == nu[index]*K*K*(sum2+N0w))
        index=index+1
        return
    for idx in range(last_idx+1,K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()
dfs1(-1,0)
model.params.NonConvex=2
model.params.NumericFocus=3
model.params.FeasibilityTol=1e-9
model.params.OptimalityTol=1e-9
model.params.ScaleFlag=2
model.params.Aggregate=0
model.optimize()
print("Obj: ", model.objVal)
print("q: ", q)
opt=model.objVal
model.printQuality()
For the constraint (c), I would like to ask a question. Is it appropriate to write it as
for k in range(0,K):
    model.addGenConstrPow(v[k], r[k], 0.5, "FuncPieces=-2 FuncPieceError=0.0001")
or as
model.addConstrs(v[k] == (r[k])**2 for k in range(0,K))
(It should be noted that r[k] is already positive by the variable definition.) The problems with the former writing method are as follows.
Changed value of parameter NonConvex to 2
Prev: -1 Min: -1 Max: 2 Default: -1
Changed value of parameter NumericFocus to 3
Prev: 0 Min: 0 Max: 3 Default: 0
Changed value of parameter FeasibilityTol to 1e-09
Prev: 1e-06 Min: 1e-09 Max: 0.01 Default: 1e-06
Changed value of parameter OptimalityTol to 1e-09
Prev: 1e-06 Min: 1e-09 Max: 0.01 Default: 1e-06
Changed value of parameter ScaleFlag to 2
Prev: -1 Min: -1 Max: 3 Default: -1
Changed value of parameter Aggregate to 0
Prev: 1 Min: 0 Max: 1 Default: 1
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 0 rows, 384 columns and 0 nonzeros
Model fingerprint: 0x640229d0
Model has 372 quadratic constraints
Model has 10 general constraints
Variable types: 384 continuous, 0 integer (0 binary)
Coefficient statistics:
Matrix range [0e+00, 0e+00]
QMatrix range [3e-03, 1e+04]
QLMatrix range [1e+00, 1e+08]
Objective range [4e-03, 4e-03]
Bounds range [3e-10, 3e+05]
RHS range [0e+00, 0e+00]
QRHS range [1e+00, 2e+05]
Presolve removed 0 rows and 22 columns
Presolve time: 0.02s
Presolved: 25842 rows, 6662 columns, 61620 nonzeros
Presolved model has 6400 bilinear constraint(s)
Variable types: 6662 continuous, 0 integer (0 binary)
Root relaxation: objective 1.248260e-02, 13165 iterations, 1.81 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 0.01248 0 15 - 0.01248 - - 1s
H 0 0 0.0251561 0.01248 50.4% - 2s
0 2 0.01248 0 15 0.02516 0.01248 50.4% - 2s
* 14 14 3 0.0130558 0.01304 0.14% 422 4s
Explored 23 nodes (19937 simplex iterations) in 4.89 seconds
Thread count was 8 (of 8 available processors)
Solution count 2: 0.0130558 0.0251561
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (5.1011e-09) exceeds tolerance
Warning: max general constraint violation (3.7379e-09) exceeds tolerance
Best objective 1.305580428846e-02, best bound 1.305580428846e-02, gap 0.0000%
Obj: 0.013055804288459483
q: {0: <gurobi.Var q[0] (value 0.0)>, 1: <gurobi.Var q[1] (value 0.0)>}
Solution quality statistics for model MyModel2 :
Maximum violation:
Bound : 0.00000000e+00
Constraint : 5.10105021e-09 (QC226)
Integrality : 0.00000000e+00
I would like to ask you how to solve the two warning issues. Then, the problems with the latter writing method are as follows.
Changed value of parameter NonConvex to 2
Prev: -1 Min: -1 Max: 2 Default: -1
Changed value of parameter NumericFocus to 3
Prev: 0 Min: 0 Max: 3 Default: 0
Changed value of parameter FeasibilityTol to 1e-09
Prev: 1e-06 Min: 1e-09 Max: 0.01 Default: 1e-06
Changed value of parameter OptimalityTol to 1e-09
Prev: 1e-06 Min: 1e-09 Max: 0.01 Default: 1e-06
Changed value of parameter ScaleFlag to 2
Prev: -1 Min: -1 Max: 3 Default: -1
Changed value of parameter Aggregate to 0
Prev: 1 Min: 0 Max: 1 Default: 1
Gurobi Optimizer version 9.1.2 build v9.1.2rc0 (win64)
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 0 rows, 384 columns and 0 nonzeros
Model fingerprint: 0x04e81293
Model has 382 quadratic constraints
Coefficient statistics:
Matrix range [0e+00, 0e+00]
QMatrix range [3e-03, 1e+04]
QLMatrix range [1e+00, 1e+08]
Objective range [4e-03, 4e-03]
Bounds range [3e-10, 3e+05]
RHS range [0e+00, 0e+00]
QRHS range [1e+00, 2e+05]
Continuous model is non-convex -- solving as a MIP.
Presolve time: 0.02s
Presolved: 25928 rows, 6686 columns, 61820 nonzeros
Presolved model has 6422 bilinear constraint(s)
Variable types: 6686 continuous, 0 integer (0 binary)
Root simplex log...
Iteration Objective Primal Inf. Dual Inf. Time
17237 -1.8045301e-02 2.306531e-04 3.255809e+11 5s
21918 1.8956010e-02 3.334482e-04 1.765671e+12 10s
29181 8.9006723e-04 9.741333e-05 0.000000e+00 15s
34863 1.1348627e-02 0.000000e+00 0.000000e+00 18s
Root relaxation: objective 1.134863e-02, 34863 iterations, 17.89 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 0.01135 0 10 - 0.01135 - - 17s
0 2 0.01135 0 10 - 0.01135 - - 19s
3 6 infeasible 2 - 0.01135 - 86.7 20s
15 14 infeasible 4 - 0.01135 - 132 37s
19 16 0.01165 5 11 - 0.01135 - 2239 44s
23 18 infeasible 5 - 0.01135 - 2305 72s
29 21 infeasible 6 - 0.01135 - 3371 92s
34 25 0.01135 6 20 - 0.01135 - 5219 125s
40 33 0.01135 7 20 - 0.01135 - 5483 207s
48 35 0.01135 8 27 - 0.01135 - 5608 217s
66 54 0.01135 10 20 - 0.01135 - 4301 228s
79 58 0.01135 11 20 - 0.01135 - 3727 234s
93 62 0.01135 12 20 - 0.01135 - 3249 246s
107 83 0.01135 13 10 - 0.01135 - 3080 268s
130 99 0.01155 14 10 - 0.01135 - 2851 272s
152 118 0.01135 15 10 - 0.01135 - 2443 276s
177 142 0.01189 18 16 - 0.01135 - 2174 281s
209 166 0.01135 20 21 - 0.01135 - 1923 301s
241 180 0.01135 22 21 - 0.01135 - 1825 323s
269 207 0.01337 26 12 - 0.01135 - 1828 336s
300 244 0.01424 33 10 - 0.01135 - 1709 358s
337 296 0.01424 41 10 - 0.01135 - 1578 360s
405 363 0.01424 46 10 - 0.01135 - 1319 371s
488 429 0.01424 56 10 - 0.01135 - 1131 380s
574 493 0.01424 69 10 - 0.01135 - 991 394s
* 665 302 60 0.0121132 0.01135 6.31% 888 394s
666 234 cutoff 84 0.01211 0.01135 6.31% 887 409s
705 258 infeasible 11 0.01211 0.01135 6.31% 905 428s
786 270 0.01137 36 10 0.01211 0.01135 6.31% 849 440s
965 334 infeasible 36 0.01211 0.01135 6.31% 727 456s
* 1048 330 56 0.0121093 0.01135 6.28% 673 456s
1076 363 0.01198 63 6 0.01211 0.01135 6.28% 669 464s
1213 446 0.01198 87 6 0.01211 0.01135 6.28% 615 468s
1440 584 0.01204 15 14 0.01211 0.01135 6.28% 523 471s
* 1579 352 60 0.0115466 0.01135 1.71% 478 471s
1660 199 cutoff 16 0.01155 0.01135 1.71% 457 489s
1743 208 0.01135 15 23 0.01155 0.01135 1.71% 460 504s
1782 205 infeasible 19 0.01155 0.01135 1.71% 474 511s
1831 206 cutoff 22 0.01155 0.01135 1.71% 477 518s
1874 220 infeasible 15 0.01155 0.01135 1.71% 480 524s
1910 290 cutoff 25 0.01155 0.01135 1.71% 479 570s
2015 224 cutoff 106 0.01155 0.01135 1.71% 490 600s
2059 225 0.01149 17 0 0.01155 0.01135 1.71% 528 613s
2061 226 0.01149 43 0 0.01155 0.01149 0.47% 528 650s
Explored 2065 nodes (1133934 simplex iterations) in 652.39 seconds
Thread count was 8 (of 8 available processors)
Solution count 3: 0.0115466 0.0121093 0.0121132
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (4.7465e-09) exceeds tolerance
Best objective 1.154663448745e-02, best bound 1.154663448745e-02, gap 0.0000%
Obj: 0.01154663448744575
q: {0: <gurobi.Var q[0] (value 0.01176034045085275)>, 1: <gurobi.Var q[1] (value 238.42679656337583)>}
Solution quality statistics for model MyModel2 :
Maximum violation:
Bound : 0.00000000e+00
Constraint : 4.74646360e-09 (QC316)
Integrality : 0.00000000e+00
I would like to ask you how to solve this warning issue.
In summary, I would like to ask you three questions.
(1) How should constraint (c) be written? Both methods should be correct, but the warnings differ.
(2) For the former writing method, how to deal with "Warning: max constraint violation (5.1011e-09) exceeds tolerance" and "Warning: max general constraint violation (3.7379e-09) exceeds tolerance"?
(3) For the latter writing method, how to deal with "Warning: max constraint violation (4.7465e-09) exceeds tolerance"?
Looking forward to your reply! Many thanks!
Best regards,
Yali
Dear Maliheh,
About the questions listed above, we would greatly appreciate your comments. Thank you very much!
Best regards,
Yali
(1) How to write the constraint (c)? Both methods should be correct, but the warning issues are different.
First approach: when a general function constraint such as a power function is used to model a nonlinear relationship (here, a square root), Gurobi automatically adds a piecewise-linear approximation of the function to the model.
Second approach: a quadratic equality constraint is handled directly by Gurobi as a non-convex quadratic constraint.
(2) For the former writing method, how to deal with "Warning: max constraint violation (5.1011e-09) exceeds tolerance" and "Warning: max general constraint violation (3.7379e-09) exceeds tolerance"
(3) For the latter writing method, how to deal with "Warning: max constraint violation (4.7465e-09) exceeds tolerance"
Gurobi solves the model in the scaled presolved space. Both scaling and presolve have implicit effects on the feasibility tolerance value. This can lead to constraint and bound violations beyond the feasibility tolerance value 1e-9 when the solution is mapped back to the unscaled and original problem space. Are violations as small as 5e-9 unacceptable for your problem?
- The log shows that there are variables with bounds as small as 3e-10 in your problem. Does such a small value have any physical meaning in your problem? If not, I would suggest zeroing it out. If it does, you would need to consider scaling your variables.
- You can experiment with lower values for the ScaleFlag parameter but this can increase the runtime.
- For question 2, you can also experiment with parameters/attributes such as FuncPieces, FuncPieceLength, FuncPieceError, FuncPieceRatio which control the accuracy of the piecewise linear approximation. Of course, there is a speed-versus-accuracy tradeoff here. For example, the bigger the FuncPieces parameter is, the more accurate the approximation is at the cost of longer runtime. (See section Function Constraints in General Constraints documentation)
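As a rough, Gurobi-independent illustration of the FuncPieces tradeoff mentioned above, the sketch below measures how the maximum error of an equal-width piecewise-linear interpolant of the square root (the function approximated in the first approach) shrinks as the number of pieces grows. The interval and piece counts are arbitrary choices for illustration, not values taken from the model.

```python
import numpy as np

def pwl_max_error(f, lo, hi, pieces):
    """Max gap between f and its piecewise-linear interpolant on equal-width pieces."""
    xs = np.linspace(lo, hi, pieces + 1)   # breakpoints of the PWL approximation
    grid = np.linspace(lo, hi, 10001)      # dense evaluation grid
    approx = np.interp(grid, xs, f(xs))    # linear interpolation through breakpoints
    return float(np.max(np.abs(approx - f(grid))))

# More pieces -> smaller approximation error, at the cost of a bigger model,
# which is the speed-versus-accuracy tradeoff that FuncPieces controls.
e_small = pwl_max_error(np.sqrt, 0.01, 1.0, 10)
e_large = pwl_max_error(np.sqrt, 0.01, 1.0, 1000)
```

In this sketch `e_large` is several orders of magnitude smaller than `e_small`, mirroring how a larger FuncPieces value tightens the approximation while enlarging the model Gurobi must solve.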
Best regards,
Maliheh
Hi, Maliheh,
(1) Regarding the "Warning: max constraint violation (4.7465e-09) exceeds tolerance": since the violation is relatively small, if I can accept it, can the output be considered correct? Do such warnings sometimes not need to be addressed?
(2) Why, after I explicitly added constraints to make the objective function positive, was the output negative, with no error and no warnings reported?
Solution count 4: -7.77156e-16 0.000672095 0.00068015 0.000683127
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (6.4220e-07) exceeds tolerance
Best objective -7.771561172376e-16, best bound -7.771561172376e-16, gap 0.0000%
Obj: -7.771561172376096e-16
Looking forward to your reply! Many thanks!
Best regards,
Yali
Hi, Maliheh,
In addition to the two questions I asked you above, I also want to ask you new questions.
The optimization problem is shown as follows. Optimization variables are underlined in red.
To facilitate the solution, we introduce auxiliary variables. Then, the optimization problem is reformulated as follows. Optimization variables are underlined in blue.
My code is as follows,
from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random
#define global variables
global K
K = 10
global M
M = 5
global ATtheta1
ATtheta1=0.42
global ATtheta2
ATtheta2=2.00
global CPtheta1
CPtheta1=0.11
global CPtheta2
CPtheta2=0.68
global CStheta1
CStheta1=0.05
global CStheta2
CStheta2=0.62
global H
H=100
global Pmax
Pmax=10**(-2)
global N0w
#N0w=10**(-11)
N0w=0
global beta0
beta0=10**(-4)
global threshold
threshold=10**(-3)
global sumcombination
sumcombination=int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))
#The positions of the K users are given randomly
global upos
upos = []
for i in range(0,K):
    upos.append([400*random.random(),400*random.random()])
#define distance
def DS(i,j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)
#The maximum value is obtained in order to normalize the distance
global DSset
DSset= []
global DSmax
for i in range(0,K):
    for j in range(0,K):
        DSset.append(DS(i,j))
DSmax=max(DSset)
#define function
def cov(i,j):
    return math.exp(-((DS(i,j)/DSmax/ATtheta1)**ATtheta2))
global deltaz
sum0=0
for k in range(0,K):
    for k1 in range(0,K):
        sum0=sum0+cov(k,k1)
deltaz=sum0/K/K
q=np.array([250,250])
#create model
model2 = gp.Model("MyModel2")
#declare variables
b=model2.addVars(K, lb=0, ub=math.sqrt(Pmax), name="b")
t1=model2.addVars(K, K, lb=0, ub=Pmax, name="t1")
nu1=model2.addVars(sumcombination, lb=0, ub=deltaz, name="nu1")
#set objective
obj = LinExpr(0)
for i in range(sumcombination):
    obj.addTerms(-math.factorial(M)*math.factorial(K-M)/math.factorial(K), nu1[i])
model2.setObjective(obj+deltaz, GRB.MINIMIZE)
for k in range(0,K):
    for k1 in range(0,K):
        model2.addConstr(t1[k,k1] == b[k]*b[k1])
combination=[]
index=0
def dfs1(last_idx, cnt):
    global index
    if cnt == M:
        print(combination)
        sum1=0
        for k in range(0,K):
            for km in range(0,M):
                sum1=sum1+(b[combination[km]]*math.sqrt(beta0/(H**2+(q[0]-upos[combination[km]][0])**2+(q[1]-upos[combination[km]][1])**2))*cov(k,combination[km]))
        sum2=0
        for km in range(0,M):
            for km1 in range(0,M):
                sum2=sum2+(t1[combination[km],combination[km1]]*math.sqrt(beta0/(H**2+(q[0]-upos[combination[km]][0])**2+(q[1]-upos[combination[km]][1])**2))*math.sqrt(beta0/(H**2+(q[0]-upos[combination[km1]][0])**2+(q[1]-upos[combination[km1]][1])**2))*cov(combination[km],combination[km1]))
        model2.addConstr(sum1**2 == nu1[index]*K*K*(sum2+N0w))
        index=index+1
        return
    for idx in range(last_idx+1,K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()
dfs1(-1,0)
model2.params.NonConvex=2
model2.params.NumericFocus=2
model2.params.FeasibilityTol=1e-9
model2.params.OptimalityTol=1e-9
model2.params.ScaleFlag=0
model2.params.Aggregate=0
model2.optimize()
print("Obj: ", model2.objVal)
print("b: ", b)
model2.printQuality()
The output results are as follows,
Set parameter NonConvex to value 2
Set parameter NumericFocus to value 2
Set parameter FeasibilityTol to value 1e-09
Set parameter OptimalityTol to value 1e-09
Set parameter ScaleFlag to value 0
Set parameter Aggregate to value 0
Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (win64)
CPU model: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 0 rows, 362 columns and 0 nonzeros
Model fingerprint: 0x48473f75
Model has 352 quadratic constraints
Coefficient statistics:
Matrix range [0e+00, 0e+00]
QMatrix range [7e-10, 1e+00]
QLMatrix range [1e+00, 1e+00]
Objective range [4e-03, 4e-03]
Bounds range [1e-02, 4e-01]
RHS range [0e+00, 0e+00]
Continuous model is non-convex -- solving as a MIP
Found heuristic solution: objective 0.3558289
Presolve time: 0.00s
Presolved: 25842 rows, 6662 columns, 61250 nonzeros
Presolved model has 6400 bilinear constraint(s)
Variable types: 6662 continuous, 0 integer (0 binary)
Root relaxation: objective 5.551115e-17, 0 iterations, 0.01 seconds (0.00 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 -0.0000000 -0.00000 0.00% - 0s
Explored 1 nodes (0 simplex iterations) in 0.13 seconds (0.06 work units)
Thread count was 8 (of 8 available processors)
Solution count 2: -1.11022e-16 0.355829
Optimal solution found (tolerance 1.00e-04)
Best objective -1.665334536938e-16, best bound -1.665334536938e-16, gap 0.0000%
Obj: -1.6653345369377348e-16
b: {0: <gurobi.Var b[0] (value 0.0)>, 1: <gurobi.Var b[1] (value 0.0)>, 2: <gurobi.Var b[2] (value 0.0)>, 3: <gurobi.Var b[3] (value 0.0)>, 4: <gurobi.Var b[4] (value 0.0)>, 5: <gurobi.Var b[5] (value 0.0)>, 6: <gurobi.Var b[6] (value 0.0)>, 7: <gurobi.Var b[7] (value 0.0)>, 8: <gurobi.Var b[8] (value 0.0)>, 9: <gurobi.Var b[9] (value 0.0)>}
Solution quality statistics for model MyModel2 :
Maximum violation:
Bound : 0.00000000e+00
Constraint : 0.00000000e+00
Integrality : 0.00000000e+00
Firstly, the variable “b” should be greater than 0, not equal to 0. The result is incorrect.
Secondly, according to the current output, when b equals 0 the result should be 0.355829 (from "Solution count 2: -1.11022e-16 0.355829"), rather than the reported optimal value.
Thirdly, since the optimization variable should be strictly greater than 0, we set its "lb" to 1e-9 rather than 0, but the output is still similar and the problem persists.
Therefore, I would like to ask you two questions about the above post and three questions about this post. Looking forward to your reply! Many thanks!
Best regards,
Yali
Hi, Maliheh,
In addition to the above questions, I have accumulated another question: how should I scale the variables? For example, suppose I multiply a variable "b" whose lower bound is as small as 3e-10 by 1e5, so that the lower bound of the scaled "b" becomes 3e-5. Do I then need to divide "b" by 1e5 elsewhere in the code so that the output does not change? Looking forward to your comments on the questions in the three posts above.
Best regards,
Yali
(1) Regarding the "Warning: max constraint violation (4.7465e-09) exceeds tolerance": since the violation is relatively small, if I can accept it, can the output be considered correct? Do such warnings sometimes not need to be addressed?
Gurobi is a floating-point based MIP solver, and computers represent real numbers with finite precision. Imprecise arithmetic results in round-off errors. To deal with this reality, tolerance values are introduced that allow accepting a solution whose violation is at most \(\epsilon\), where \(\epsilon\) equals the feasibility tolerance. Whether a violation as small as 4.7465e-09 is acceptable is a question that needs to be answered by the user of the model.
(2) Why, after I explicitly added constraints to make the objective function positive, was the output negative, with no error and no warnings reported?
Firstly, the variable “b” should be greater than 0, not equal to 0. The result is incorrect.
- It is not possible to have strict inequality constraints in Gurobi (see "Why doesn't Gurobi support strict inequality constraints like x < a?" for more information).
- The variable \(b\) is defined with lower bound 0 in your model. So, it can take value 0.
- I do not see any constraint that enforces the objective to be strictly positive in your model. Furthermore, any constraint in the form \( Ax \geq b\) is considered satisfied if \(Ax - b \geq -\mbox{FeasibilityTol}\).
Secondly, according to the current output, when b equals 0 the result should be 0.355829 (from "Solution count 2: -1.11022e-16 0.355829"), rather than the reported optimal value.
- If all \(b\) decision variables are zero, according to constraints \((b)\), all decision variables \(t\) are also zero.
- According to constraints \((c)\), you then have \(0 = 0 + K^2 \sigma_w^2 \nu_{\mathcal{I}}\). Since the \(\sigma_w^2\) parameter is set to 0, this reduces to \(0 = 0\). This means that the variable \(\nu_{\mathcal{I}}\) can take any value within its bounds without violating any constraints. To minimize the objective function, the optimal value of the \(\nu_{\mathcal{I}}\) variable equals its upper bound \(\mbox{deltaz}\), resulting in an optimal objective value of 0. If you would like to force \(\nu_{\mathcal{I}}\) to equal 0 when the \(b\) and \(t\) variables are 0, you need to set the \(\sigma_w^2\) parameter to a value strictly greater than 0 (a few orders of magnitude larger than the feasibility tolerance).
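This argument can be checked with plain Python, no Gurobi needed. The hypothetical function below restates the residual of constraint \((c)\): with \(\sigma_w^2 = 0\) the residual vanishes for every value of \(\nu\) once \(b\) and \(t\) (and hence sum1 and sum2) are zero, while any positive \(\sigma_w^2\) forces \(\nu = 0\).

```python
K = 10  # number of users, as in the model above

def residual_c(sum1, sum2, nu, sigma_w2):
    """Residual of constraint (c): sum1^2 - nu * K^2 * (sum2 + sigma_w2)."""
    return sum1**2 - nu * K * K * (sum2 + sigma_w2)

# With b = t = 0 (so sum1 = sum2 = 0) and sigma_w2 = 0, every nu is feasible:
assert all(residual_c(0.0, 0.0, nu, 0.0) == 0.0 for nu in (0.0, 0.3, 0.8))

# With sigma_w2 > 0, only nu = 0 satisfies the constraint at b = t = 0:
assert residual_c(0.0, 0.0, 0.0, 1e-8) == 0.0
assert residual_c(0.0, 0.0, 0.3, 1e-8) != 0.0
```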
Thirdly, since the optimization variable should be strictly greater than 0, we set its "lb" to 1e-9 rather than 0, but the output is still similar and the problem persists.
Defining a lower bound for a variable essentially means having \(x \geq l\). The lower bound of a variable is considered to be respected as long as \(x - l \geq - \mbox{FeasibilityTol}\).
For example, suppose I multiply a variable "b" whose lower bound is as small as 3e-10 by 1e5, so that the lower bound of the scaled "b" becomes 3e-5. Do I then need to divide "b" by 1e5 elsewhere in the code so that the output does not change?
Let us assume that you would like to scale the variable \(x\) as \(x^{\prime} = 10^5 x\). You need to replace the variable \(x\) by \(10^{-5} x^{\prime}\). The lower and upper bounds of the variable \(x^{\prime}\) should be defined by multiplying the lower and upper bounds of the variable \(x\) by \(10^5\).
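A small pure-Python sketch of this substitution, where the toy constraint \(a x \geq \mbox{rhs}\) and all numeric values are made up for illustration: scaling maps every feasible point of the original model to a feasible point of the scaled model, and the constraint values are unchanged.

```python
scale = 1e5

# Original variable x with a tiny lower bound, appearing in a constraint a*x >= rhs.
a, rhs = 2.0, 8e-10
lb_x, ub_x = 3e-10, 1e-4

# Scaled variable x' = scale * x: replace x by x'/scale and scale the bounds.
lb_xp, ub_xp = scale * lb_x, scale * ub_x   # bounds of x' become 3e-5 and 10
a_p = a / scale                              # coefficient of x' in the constraint

x = 5e-10            # a feasible point of the original model
xp = scale * x       # its image in the scaled model

# Both formulations agree: same constraint value, same bound feasibility.
assert abs(a * x - a_p * xp) < 1e-18
assert (lb_x <= x <= ub_x) == (lb_xp <= xp <= ub_xp)
```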
Best regards,
Maliheh
Hi, Maliheh,
Thank you for your reply! Regarding your comments, I would like to ask a few questions.
(1) Since it is not possible to have strict inequality constraints in Gurobi, if the variable must satisfy "b>0" rather than "b>=0", how should we express it? Should it be represented in the lower bound or in a constraint?
(2) "If you would like to force νI to equal 0 if b and t variables are 0, you need to set the σw2 parameter to a value strictly greater than 0 (few orders of magnitude larger than the feasibility tolerance)." The σw2 parameter is the noise power. It should have been 1e-11, but because such a small value caused numerical issues, it was changed to 0. Can it be arbitrarily changed to any number a few orders of magnitude larger than the feasibility tolerance, such as 1e-8?
(3) How can the variables be scaled effectively?
(4) How can I resolve the warnings "Warning: 1 variables dropped from basis" and "Warning: switch to quad precision"?
Looking forward to your comments.
Best regards,
Yali
(1) Since it is not possible to have strict inequality constraints in Gurobi, if the variable must satisfy "b>0" rather than "b>=0", how should we express it? Should it be represented in the lower bound or in a constraint?
To model \(x > 0\), you need to model \(x \geq \epsilon\) either as a constraint or set the lower bound of the variable \(x\) to \(\epsilon\). The value of \(\epsilon\) depends on your application and it should be at least one order of magnitude larger than the value of the FeasibilityTol parameter.
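As a concrete check of this rule (a toy restatement in plain Python, not Gurobi code; the \(\epsilon\) value is only an example), a lower bound at the tolerance itself still admits \(x = 0\), while an \(\epsilon\) well above the tolerance rules it out:

```python
feas_tol = 1e-9  # Gurobi's default FeasibilityTol

def bound_respected(x, lb, tol=feas_tol):
    """A lower bound lb is considered respected when x - lb >= -tol."""
    return x - lb >= -tol

# With lb equal to the tolerance itself, x = 0 still counts as feasible:
assert bound_respected(0.0, 1e-9)

# An eps at least an order of magnitude above the tolerance excludes x = 0:
eps = 1e-6  # example value only; choose eps to suit your application
assert not bound_respected(0.0, eps)
```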
(2) "If you would like to force νI to equal 0 if b and t variables are 0, you need to set the σw2 parameter to a value strictly greater than 0 (few orders of magnitude larger than the feasibility tolerance)." The σw2 parameter is the noise power. It should have been 1e-11, but because such a small value caused numerical issues, it was changed to 0. Can it be arbitrarily changed to any number a few orders of magnitude larger than the feasibility tolerance, such as 1e-8?
I was just explaining why the optimal objective function was zero and not the value you expected to see.
- Ideally, the model should not include any parameter/coefficient smaller than the feasibility tolerance value. The rule of thumb is for the smallest coefficient to be at least one order of magnitude bigger.
(3) How can the variables be scaled effectively?
One source of numerical issues is bad numbers in the model, such as a large matrix coefficient range, large objective coefficients, or large RHS values. Scaling the variables/constraints helps to obtain reasonable coefficients and address this source of numerical issues.
(4) How can I resolve the warnings "Warning: 1 variables dropped from basis" and "Warning: switch to quad precision"?
I did not see such warnings in your log. These warning messages indicate that the basis matrix became near-singular. The Gurobi Optimizer then replaced one of the structural variables with a slack variable, or switched to higher precision, to address the singularity of the basis matrix.
Hope this concludes all your questions.
Best regards,
Maliheh
Hi, Maliheh,
My optimization problem is as follows, and the optimization variables are underlined in red.
First, because the lower bounds of my variables v and t can be as small as 1e-10, which is smaller than "model1.params.FeasibilityTol=1e-9", I scale the variables as
v=model1.addVars(K, lb=(10**3)*beta0/(H**2+DSmax_a), ub=(10**3)*beta0/H/H, name="v")
t=model1.addVars(K, K, lb=(10**3)*beta0/(H**2+DSmax_a), ub=(10**3)*beta0/H/H, name="t")
and then the corresponding constraints are changed to
model1.addConstrs((v[k]*(H**2+a[k])/beta0 == 1*(10**(3)) for k in range(0,K)), name="c2")
model1.addConstrs(v[k] == (10**3)*r[k]*r[k] for k in range(0,K))
for k in range(0,K):
    for k1 in range(0,K):
        model1.addConstr(t[k,k1] == (10**3)*r[k]*r[k1], name="c3")
Second, in constraint (e), since sum1 and sum2 have orders of magnitude 1e-5 and 1e-10, respectively, I multiply both sides of the equation by 1e3.
My complete code is as follows,
from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random
#define global variables
global K
K = 10
global M
M = 5
global ATtheta1
ATtheta1=0.42
global ATtheta2
ATtheta2=2.00
global H
H=100
global Pmax
Pmax=10**(-2)
global N0w
#N0w=10**(-11) #to avoid large coefficients numerical issue
N0w=0
global beta0
beta0=10**(-4)
global sumcombination
sumcombination=int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))
#The positions of the K users are given randomly
global upos
upos=[[103.1791349465347, 245.55998960612703], [255.48098415065635, 320.2532665566555], [64.55202835511899, 144.57501761863028], [379.2670054991101, 260.5062095134453], [28.71787081756203, 327.8530626055657], [14.324474423137357, 288.27329202612844], [206.68794921797164, 10.573469015806047], [60.666225853541754, 279.34522636111916], [312.530530067231, 93.03878739105653], [337.8884265853058, 197.4821008557357]]
#define distance
def DS(i,j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)
#The maximum value is obtained in order to normalize the distance
global DSset
DSset= []
global DSmax
for i in range(0,K):
    for j in range(0,K):
        DSset.append(DS(i,j))
DSmax=max(DSset)
#define function
def cov(i,j):
    return math.exp(-((DS(i,j)/DSmax/ATtheta1)**ATtheta2))
global deltaz
sum0=0
for k in range(0,K):
    for k1 in range(0,K):
        sum0=sum0+cov(k,k1)
deltaz=sum0/K/K
#initialized variables
b0=math.sqrt(Pmax)*np.array(ones((K)))
#Obtain the maximum distance to tighten the range of variable a
global DSset_a
DSset_a= []
global DSmax_a
for i in range(0,K):
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-400)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-400)**2)
DSmax_a=max(DSset_a)
#create model
model1 = gp.Model("MyModel1")
#declare variables
q=model1.addVars(2, lb=0, ub=400, vtype=GRB.CONTINUOUS, name="q")
a=model1.addVars(K, lb=0, ub=DSmax_a, name="a")
v=model1.addVars(K, lb=(10**3)*beta0/(H**2+DSmax_a), ub=(10**3)*beta0/H/H, name="v")
r=model1.addVars(K, lb=math.sqrt(beta0/(H**2+DSmax_a)), ub=math.sqrt(beta0/H/H), name="r")
t=model1.addVars(K, K, lb=(10**3)*beta0/(H**2+DSmax_a), ub=(10**3)*beta0/H/H, name="t")
nu=model1.addVars(sumcombination, lb=0, ub=deltaz, name="nu")
#set objective (choose M numbers from 1 to K; for each combination compute sum7, then sum over the K!/M!/(K-M)! combinations to obtain sumvalue; combinations are enumerated by depth-first search)
sumvalue=0
for i in range(0, sumcombination):
    sumvalue=sumvalue+deltaz-nu[i]
obj=math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
model1.setObjective(obj, GRB.MINIMIZE)
model1.addConstrs((a[k]==(q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0,K)), name="c1")
model1.addConstrs((v[k]*(H**2+a[k])/beta0 == 1*(10**(3)) for k in range(0,K)), name="c2")
model1.addConstrs(v[k] == (10**3)*r[k]*r[k] for k in range(0,K))
for k in range(0,K):
    for k1 in range(0,K):
        model1.addConstr(t[k,k1] == (10**3)*r[k]*r[k1], name="c3")
combination=[]
index=0
def dfs1(last_idx, cnt):
    global index
    if cnt == M:
        print(combination)
        sum1=0
        for k in range(0,K):
            for km in range(0,M):
                sum1=sum1+(b0[combination[km]]*r[combination[km]]*cov(k,combination[km]))
        sum2=0
        for km in range(0,M):
            for km1 in range(0,M):
                sum2=sum2+(b0[combination[km]]*b0[combination[km1]]*(10**(-3))*t[combination[km],combination[km1]]*cov(combination[km],combination[km1]))
        model1.addConstr((sum1**2)*(10**3) == nu[index]*K*K*(sum2+N0w)*(10**3), name="c4")
        index=index+1
        return
    for idx in range(last_idx+1,K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()
dfs1(-1,0)
model1.params.NonConvex=2
model1.params.NumericFocus=2
model1.params.FeasibilityTol=1e-9
model1.params.OptimalityTol=1e-9
model1.params.ScaleFlag=2
model1.params.Aggregate=0
model1.optimize()
print("Obj: ", model1.objVal)
print("q: ", q)
model1.printQuality()
The output results are as follows,
Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (win64)
CPU model: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 0 rows, 384 columns and 0 nonzeros
Model fingerprint: 0x1643e273
Model has 382 quadratic constraints
Coefficient statistics:
Matrix range [0e+00, 0e+00]
QMatrix range [3e-03, 1e+04]
QLMatrix range [1e+00, 1e+08]
Objective range [4e-03, 4e-03]
Bounds range [4e-07, 2e+05]
RHS range [0e+00, 0e+00]
QRHS range [1e+03, 2e+05]
Continuous model is non-convex -- solving as a MIP
Presolve time: 0.01s
Presolved: 26138 rows, 6741 columns, 87620 nonzeros
Presolved model has 6477 bilinear constraint(s)
Variable types: 6741 continuous, 0 integer (0 binary)
Root relaxation: objective 9.714181e-04, 24208 iterations, 2.44 seconds (5.82 work units)
However, the code keeps running, the simulation time is very long, and it is difficult to obtain any results. I would like to know the reason and how to solve it.
Looking forward to your comments.
Best regards,
Yali
Hi, Maliheh,
Following up on the previous post: although scaling the variables and multiplying both sides of the equation by 1e3 can avoid some warnings, they greatly prolong the simulation time, to the point where no results are produced at all.
Looking forward to your comments.
Best regards,
Yali
The difficulty of the new version of your model is not related to scaling. It is because some of the input data has changed; for example, the values of the \(\texttt{upos}\) parameters are different. Even with the scaling removed, it still takes time to find a feasible solution.
Please note that tightening the FeasibilityTol/OptimalityTol might come at the cost of a large number of iterations. I would suggest experimenting with longer TimeLimit and parameters such as MIPFocus=1, Heuristics, and NoRelHeurTime. You might also want to experiment with the Model.feasRelaxS() to figure out relaxing which variable bounds results in a feasible solution.
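A sketch of these suggestions applied to the model1 script above. The parameter values are illustrative starting points, not tuned recommendations, and the snippet assumes model1 has already been built as in the code earlier in the thread.

```python
# Continuation of the model1 script above; all values are illustrative.
model1.params.TimeLimit = 3600      # stop after an hour and report the best solution found
model1.params.MIPFocus = 1          # bias the search toward finding feasible solutions
model1.params.Heuristics = 0.5      # spend up to 50% of the effort on heuristics
model1.params.NoRelHeurTime = 60    # run the no-relaxation heuristic for 60 seconds

# Alternatively, compute a minimal relaxation of the variable bounds that
# admits a feasible solution (relaxes bounds only, not constraints):
# model1.feasRelaxS(relaxobjtype=0, minrelax=True, vrelax=True, crelax=False)

model1.optimize()
```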
If you need any further help with the performance improvement of your model, please open a new post.
Best regards,
Maliheh