Slow growth of best bound
Answered

My code runs as follows:
Set parameter NonConvex to value 2
Set parameter NumericFocus to value 2
Set parameter ScaleFlag to value 2
Set parameter Aggregate to value 0
Set parameter MIPFocus to value 1
Set parameter Heuristics to value 0
Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (win64)
CPU model: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 0 rows, 62 columns and 0 nonzeros
Model fingerprint: 0x16dc4c4e
Model has 54 quadratic constraints
Model has 6 general constraints
Variable types: 62 continuous, 0 integer (0 binary)
Coefficient statistics:
Matrix range [0e+00, 0e+00]
QMatrix range [1e-01, 3e+01]
QLMatrix range [1e+00, 1e+04]
Objective range [2e-01, 2e-01]
Bounds range [4e-06, 2e+05]
RHS range [0e+00, 0e+00]
QRHS range [1e+00, 2e+05]
Presolve added 38 rows and 66 columns
Presolve time: 0.00s
Presolved: 753 rows, 280 columns, 2573 nonzeros
Presolved model has 6 SOS constraint(s)
Presolved model has 179 bilinear constraint(s)
Variable types: 280 continuous, 0 integer (0 binary)
Root relaxation: objective 0.000000e+00, 413 iterations, 0.01 seconds (0.01 work units)
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
0 0 0.00000 0 34 - 0.00000 - - 0s
0 0 0.00000 0 29 - 0.00000 - - 0s
0 0 0.00000 0 27 - 0.00000 - - 0s
0 0 0.00000 0 26 - 0.00000 - - 0s
0 2 0.00000 0 26 - 0.00000 - - 0s
* 4641 1019 47 0.0313675 0.00000 100% 20.1 1s
* 4976 971 88 0.0257858 0.00000 100% 19.6 1s
* 5080 886 73 0.0228217 0.00000 100% 19.4 1s
* 5113 832 87 0.0219939 0.00000 100% 19.4 1s
17188 3685 infeasible 84 0.02199 0.00000 100% 22.0 5s
*27100 5657 76 0.0219621 0.00000 100% 23.5 7s
34468 7232 infeasible 32 0.02196 0.00000 100% 23.8 10s
*40790 8238 85 0.0217257 0.00009 100% 24.1 11s
49434 10052 0.00475 26 74 0.02173 0.00125 94.2% 24.4 15s
*55640 11598 76 0.0216569 0.00183 91.5% 23.8 17s
*56428 11772 78 0.0216569 0.00191 91.2% 23.7 17s
65404 13102 infeasible 79 0.02166 0.00306 85.8% 23.3 20s
*77277 14211 62 0.0215559 0.00452 79.0% 23.0 23s
80421 14396 0.02122 71 54 0.02156 0.00529 75.5% 23.2 25s
*81613 14456 68 0.0215163 0.00542 74.8% 23.2 25s
*84666 14670 61 0.0213946 0.00606 71.7% 23.3 26s
92594 16043 infeasible 30 0.02139 0.00759 64.5% 23.3 30s
*94787 14521 75 0.0208119 0.00801 61.5% 23.4 31s
*94788 14521 75 0.0208119 0.00801 61.5% 23.4 31s
105769 16578 infeasible 34 0.02081 0.01025 50.7% 23.3 35s
......
33418136 2121994 0.01959 89 65 0.02081 0.01958 5.91% 12.6 37097s
33419333 2122035 0.02013 86 37 0.02081 0.01958 5.91% 12.6 37100s
33422904 2122299 0.01958 92 64 0.02081 0.01958 5.91% 12.6 37107s
33425224 2122411 infeasible 81 0.02081 0.01958 5.91% 12.6 37112s
33427567 2122588 cutoff 88 0.02081 0.01958 5.91% 12.6 37116s
33430002 2122711 infeasible 66 0.02081 0.01958 5.91% 12.6 37121s
33432484 2122798 cutoff 87 0.02081 0.01958 5.91% 12.6 37126s
33434756 2122905 0.01958 100 71 0.02081 0.01958 5.91% 12.6 37131s
33437150 2123143 cutoff 92 0.02081 0.01958 5.91% 12.6 37135s
33439598 2123213 0.02013 82 39 0.02081 0.01958 5.91% 12.6 37140s
33443209 2123345 infeasible 78 0.02081 0.01958 5.91% 12.6 37146s
33445652 2123483 0.01959 93 53 0.02081 0.01958 5.91% 12.6 37151s
33447964 2123546 infeasible 85 0.02081 0.01958 5.91% 12.6 37155s
33451621 2123801 infeasible 86 0.02081 0.01958 5.91% 12.6 37162s
33454066 2123933 0.02074 77 31 0.02081 0.01958 5.91% 12.6 37166s
33456478 2124088 cutoff 70 0.02081 0.01958 5.91% 12.6 37171s
33458916 2124129 cutoff 121 0.02081 0.01958 5.91% 12.6 37176s
33461303 2124270 cutoff 81 0.02081 0.01958 5.91% 12.6 37180s
33463668 2124339 0.02051 91 38 0.02081 0.01958 5.91% 12.6 37185s
The incumbent remains almost unchanged, the best bound improves only very slowly, and the code runs for a very long time. Is this situation normal, and how can it be resolved?
-
These are some very specific parameters you set. Is there a good reason for setting them?
Could you please share the model? This way, I could have a closer look. Note that uploading files in the Community Forum is not possible but we discuss an alternative in Posting to the Community Forum.
Best regards,
Jaromił
Hi, Jaromił
The main problem I want to solve is the alternating iteration of the following two problems.
The first one is to solve q with fixed b, and the second one is to solve b with fixed q. Optimization variables and auxiliary variables are highlighted in red underline.
My code is shown as follows:
from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random
#define global variables
global K
K = 10
global M
M = 4
global ATtheta1
ATtheta1=0.42
global ATtheta2
ATtheta2=2.00
global H
H=100
global Pmax
Pmax=10**(-2)
global N0w
#N0w=10**(-11) #to avoid large coefficients numerical issue
N0w=0
global beta0
beta0=10**(-4)
global threshold
threshold=10**(-3)
global sumcombination
sumcombination=int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))
#The Positions of K users are given randomly
global upos
upos=[[103.1791349465347, 245.55998960612703], [255.48098415065635, 320.2532665566555], [64.55202835511899, 144.57501761863028], [379.2670054991101, 260.5062095134453], [28.71787081756203, 327.8530626055657], [14.324474423137357, 288.27329202612844], [206.68794921797164, 10.573469015806047], [60.666225853541754, 279.34522636111916], [312.530530067231, 93.03878739105653], [337.8884265853058, 197.4821008557357]]
#define distance
def DS(i,j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)
#The maximum value is obtained in order to normalize the distance
global DSset
DSset= []
global DSmax
for i in range(0,K):
    for j in range(0,K):
        DSset.append(DS(i,j))
DSmax=max(DSset)
#define function
def cov(i,j):
    return math.exp(-((DS(i,j)/DSmax/ATtheta1)**ATtheta2))
global deltaz
sum0=0
for k in range(0,K):
    for k1 in range(0,K):
        sum0=sum0+cov(k,k1)
deltaz=sum0/K/K
#initialized variables
global b0
b0=math.sqrt(0.8)*np.array(ones((K)))
#Obtain the maximum distance to tighten the range of variable a
global DSset_a
DSset_a= []
global DSmax_a
for i in range(0,K):
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-400)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-400)**2)
DSmax_a=max(DSset_a)
global opt
opt=0
for time in range(0,20):
    #create model
    model1 = gp.Model("MyModel1")
    #declare variables
    q=model1.addVars(2, lb=0, ub=400, vtype=GRB.CONTINUOUS, name="q")
    a=model1.addVars(K, lb=0, ub=DSmax_a, name="a")
    v=model1.addVars(K, lb=1/(H**2+DSmax_a), ub=1/H/H, name="v")
    r=model1.addVars(K, lb=math.sqrt(1/(H**2+DSmax_a)), ub=math.sqrt(1/H/H), name="r")
    t=model1.addVars(K, K, lb=1/(H**2+DSmax_a), ub=1/H/H, name="t")
    nu=model1.addVars(sumcombination, lb=0.1, ub=deltaz, name="nu")
    #set objective
    sumvalue=0
    for i in range(0, sumcombination):
        sumvalue=sumvalue+deltaz-nu[i]
    obj=math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
    model1.setObjective(obj, GRB.MINIMIZE)
    model1.addConstrs((a[k]==(q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0,K)), name="c1")
    model1.addConstrs((v[k]*(H**2+a[k]) == 1 for k in range(0,K)), name="c2")
    model1.addConstrs((r[k]*r[k] == v[k] for k in range(0,K)), name="c3")
    for k in range(0,K):
        for k1 in range(0,K):
            model1.addConstr(t[k,k1] == r[k]*r[k1], name="c4")
    combination=[]
    index=0
    def dfs1(last_idx, cnt):
        global index
        if cnt == M:
            print(combination)
            sum1=0
            for k in range(0,K):
                for km in range(0,M):
                    sum1=sum1+(b0[combination[km]]*r[combination[km]]*cov(k,combination[km]))
            sum2=0
            for km in range(0,M):
                for km1 in range(0,M):
                    sum2=sum2+(b0[combination[km]]*b0[combination[km1]]*t[combination[km],combination[km1]]*cov(combination[km],combination[km1]))
            model1.addConstr(sum1**2 == nu[index]*K*K*sum2, name="c5")
            index=index+1
            return
        for idx in range(last_idx+1,K):
            combination.append(idx)
            dfs1(idx, cnt+1)
            combination.pop()
    dfs1(-1,0)
    model1.params.NonConvex=2
    #model1.params.NumericFocus=2
    model1.params.ScaleFlag=2
    model1.params.Aggregate=0
    model1.params.MIPFocus=1
    model1.params.Heuristics=0
    #model1.params.NoRelHeurWork=500
    model1.optimize()
    model1.printQuality()
    print("Obj: ", model1.objVal)
    print("q: ", q)
    print("a: ", a)
    print("v: ", v)
    print("r: ", r)
    print("t: ", t)
    print("nu: ", nu)
    for i in range (0,2):
        q[i]=q[i].X
    #create model
    model2 = gp.Model("MyModel2")
    #declare variables
    b=model2.addVars(K, lb=math.sqrt(Pmax), ub=math.sqrt(0.8), name="b")
    t1=model2.addVars(K, K, lb=Pmax, ub=0.8, name="t1")
    nu1=model2.addVars(sumcombination, lb=0.1, ub=deltaz, name="nu1")
    #set objective
    sumvalue=0
    for i in range(0, sumcombination):
        sumvalue=sumvalue+deltaz-nu1[i]
    obj=math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
    model2.setObjective(obj, GRB.MINIMIZE)
    for k in range(0,K):
        for k1 in range(0,K):
            model2.addConstr(t1[k,k1] == b[k]*b[k1], name="c6")
    combination=[]
    index=0
    def dfs1(last_idx, cnt):
        global index
        if cnt == M:
            print(combination)
            sum1=0
            for k in range(0,K):
                for km in range(0,M):
                    sum1=sum1+(b[combination[km]]*math.sqrt(1/(H**2+(q[0]-upos[combination[km]][0])**2+(q[1]-upos[combination[km]][1])**2))*cov(k,combination[km]))
            sum2=0
            for km in range(0,M):
                for km1 in range(0,M):
                    sum2=sum2+(t1[combination[km],combination[km1]]*math.sqrt(1/(H**2+(q[0]-upos[combination[km]][0])**2+(q[1]-upos[combination[km]][1])**2))*math.sqrt(1/(H**2+(q[0]-upos[combination[km1]][0])**2+(q[1]-upos[combination[km1]][1])**2))*cov(combination[km],combination[km1]))
            model2.addConstr((sum1**2) == nu1[index]*K*K*(sum2), name="c7")
            index=index+1
            return
        for idx in range(last_idx+1,K):
            combination.append(idx)
            dfs1(idx, cnt+1)
            combination.pop()
    dfs1(-1,0)
    model2.params.NonConvex=2
    #model2.params.NumericFocus=2
    model2.params.ScaleFlag=2
    model2.params.Aggregate=0
    model2.params.MIPFocus=1
    model2.params.Heuristics=0
    #model2.params.NoRelHeurTime=500
    #model2.params.NoRelHeurWork=10
    model2.optimize()
    model2.printQuality()
    print("Obj: ", model2.objVal)
    print("b: ", b)
    print("t1: ", t1)
    print("nu1: ", nu1)
    for i in range (0,K):
        b0[i]=b[i].X
    def my_abs(x):
        if x >= 0:
            return x
        else:
            return -x
    if my_abs(model2.objVal-opt) <= threshold:
        break
    if my_abs(model2.objVal-opt) >= threshold:
        opt=model2.objVal
    opt=model2.objVal
print("opt: ", opt)
print("q: ", q)
print("b: ", b)

The current problems are as follows:
(1) The running time is quite long. The incumbent decreases and the best bound increases, but the incumbent has now remained essentially unchanged for some time. Only the best bound keeps increasing, very slowly, so the gap never reaches zero and no result is output.
(2) The second problem is somewhat simpler than the first problem, but its runtime is even longer.
(3) The first problem fixes b to solve for q; the second problem then uses the solved q to solve for b, and the objective value should keep decreasing until convergence. But, for example, in the second problem at gap=49% (or some other value), the incumbent changes little while the best bound keeps moving, and the incumbent is higher than the objective value of the first problem. Of course, since the gap is not yet 0, the final result may still end up smaller than the first objective value. However, the running time is very long, so could there be another reason for this deviation? That is, the objective value of the second problem should not be greater than the objective value of the first problem.
These are all my questions. Looking forward to your reply and help! Many thanks!
All the best,
Yali
Hi Yali,
There are a few things you could try to improve your model. I noticed that you have variables \(\texttt{a}\) with very large bounds participating in bilinear terms together with variables with very small bounds, e.g., \(\texttt{v, t}\). It should help to re-scale your model to improve these bounds. Please refer to our Guidelines for Numerical Issues for more information.
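As a rough illustration of what such a re-scaling buys (the numbers below are representative magnitudes taken from the model in this thread, not exact data): substituting a scaled variable such as \(v_s = v \cdot H^2\) moves bounds of order 1e-6 to 1e-4 into a well-scaled range near [0, 1].

```python
import math

# Representative magnitudes: H = 100 and a maximum squared horizontal
# distance DSmax_a of roughly 3.2e5 (a 400 x 400 area). The exact values
# do not matter for the scaling argument.
H = 100
DSmax_a = 3.2e5

# Bounds of v as modeled: v in [1/(H^2 + DSmax_a), 1/H^2] -- tiny numbers
# that get multiplied with large quantities such as H^2 + a.
v_lb = 1 / (H**2 + DSmax_a)
v_ub = 1 / H**2

# Substituting v_s = v * H^2 (and dividing the affected constraints by
# H^2 accordingly) brings the bounds into a range close to [0, 1].
vs_lb = v_lb * H**2
vs_ub = v_ub * H**2

assert 0.01 < vs_lb < 0.1
assert math.isclose(vs_ub, 1.0)
```

The same substitution idea applies to the other small-magnitude variables (e.g., \(\texttt{t}\)); the goal is to keep all coefficient and bound magnitudes within a few orders of each other.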
Additionally, you could try to factor out the \(\texttt{nu}\) variable in these constraints
c5: [ 14.9188163510744 r[0] ^2 + 18.82873058392342 r[0] * r[1]
+ 21.24123481996764 r[0] * r[2] + 17.9234690409059 r[0] * r[3]
+ 5.9408381848008 r[1] ^2 + 13.40406230237633 r[1] * r[2]
+ 11.31042040329848 r[1] * r[3] + 7.560754922834621 r[2] ^2
+ 12.75961194665704 r[2] * r[3] + 5.383314850530639 r[3] ^2
- 80 t[0,0] * nu[0] - 24.04260614960947 t[0,1] * nu[0]
- 49.08850665350337 t[0,2] * nu[0] - 3.28077821776377 t[0,3] * nu[0]
- 24.04260614960947 t[1,0] * nu[0] - 80 t[1,1] * nu[0]
- 4.804610859545798 t[1,2] * nu[0] - 36.33210383735641 t[1,3] * nu[0]
- 49.08850665350337 t[2,0] * nu[0] - 4.804610859545798 t[2,1] * nu[0]
- 80 t[2,2] * nu[0] - 0.7279300634456253 t[2,3] * nu[0]
- 3.28077821776377 t[3,0] * nu[0] - 36.33210383735641 t[3,1] * nu[0]
- 0.7279300634456253 t[3,2] * nu[0] - 80 t[3,3] * nu[0] ] = 0For this, you would need to introduce a new free variable \(z\) and and the additional linear equality constraint \(z = \sum c_i t_{i}\). Then you would have
c5: [ 14.9188163510744 r[0] ^2 + 18.82873058392342 r[0] * r[1]
+ 21.24123481996764 r[0] * r[2] + 17.9234690409059 r[0] * r[3]
+ 5.9408381848008 r[1] ^2 + 13.40406230237633 r[1] * r[2]
+ 11.31042040329848 r[1] * r[3] + 7.560754922834621 r[2] ^2
+ 12.75961194665704 r[2] * r[3] + 5.383314850530639 r[3] ^2
+ nu[0] * z ] = 0
new_con: - z - 80 t[0,0] - 24.04260614960947 t[0,1]
- 49.08850665350337 t[0,2] - 3.28077821776377 t[0,3]
- 24.04260614960947 t[1,0] - 80 t[1,1]
- 4.804610859545798 t[1,2] - 36.33210383735641 t[1,3]
- 49.08850665350337 t[2,0] - 4.804610859545798 t[2,1]
- 80 t[2,2] - 0.7279300634456253 t[2,3]
- 3.28077821776377 t[3,0] - 36.33210383735641 t[3,1]
- 0.7279300634456253 t[3,2] - 80 t[3,3] = 0

This is not guaranteed to help but it is usually worth a try.
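The algebraic effect of this substitution can be checked numerically. The sketch below uses plain Python with made-up coefficients (not the actual model data) to verify that replacing the sum of product terms \(\sum_{ij} -c_{ij} t_{ij} \nu\) by the single bilinear term \(\nu z\), with the linear definition \(z = -\sum_{ij} c_{ij} t_{ij}\), leaves the constraint value unchanged:

```python
import random

# Hypothetical data standing in for the model's coefficients and a
# candidate solution point; the identity holds for any values.
random.seed(0)
n = 4
c = [[random.uniform(0.5, 80.0) for _ in range(n)] for _ in range(n)]
t = [[random.uniform(1e-4, 1e-2) for _ in range(n)] for _ in range(n)]
nu = 0.37

# Original form: n*n terms, each a product of t[i][j] and nu.
original = sum(-c[i][j] * t[i][j] * nu for i in range(n) for j in range(n))

# Factored form: one bilinear term nu * z plus a linear constraint
# defining z = -sum(c[i][j] * t[i][j]).
z = -sum(c[i][j] * t[i][j] for i in range(n) for j in range(n))
factored = nu * z

assert abs(original - factored) < 1e-12
```

The point of the reformulation is that the solver then only has to build a relaxation of one bilinear term \(\nu z\) instead of \(n^2\) of them.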
Best regards,
Jaromił
Hi, Jaromił
My original optimization objective is as follows, with optimization variables highlighted in red underline.
In order to meet the solving requirements of gurobi, we have introduced many auxiliary variables, and the optimization problem is shown as follows, with optimization variables highlighted in green underline.
My code is as follows,
from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random
#define global variables
global K
K = 10
global M
M = 4
global ATtheta1
ATtheta1=0.42
global ATtheta2
ATtheta2=2.00
global H
H=100
global Pmax
Pmax=10**(-2)
global N0w
N0w=10**(-11) #to avoid large coefficients numerical issue
global beta0
beta0=10**(-4)
global sumcombination
sumcombination=int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))
#The Positions of K users are given randomly
global upos
upos=[[103.1791349465347, 245.55998960612703], [255.48098415065635, 320.2532665566555], [64.55202835511899, 144.57501761863028], [379.2670054991101, 260.5062095134453], [28.71787081756203, 327.8530626055657], [14.324474423137357, 288.27329202612844], [206.68794921797164, 10.573469015806047], [60.666225853541754, 279.34522636111916], [312.530530067231, 93.03878739105653], [337.8884265853058, 197.4821008557357]]
#define distance
def DS(i,j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)
#The maximum value is obtained in order to normalize the distance
global DSset
DSset= []
global DSmax
for i in range(0,K):
    for j in range(0,K):
        DSset.append(DS(i,j))
DSmax=max(DSset)
#define function
def cov(i,j):
    return math.exp(-((DS(i,j)/DSmax/ATtheta1)**ATtheta2))
global deltaz
sum0=0
for k in range(0,K):
    for k1 in range(0,K):
        sum0=sum0+cov(k,k1)
deltaz=sum0/K/K
#Obtain the maximum distance to tighten the range of variable a
global DSset_a
DSset_a= []
global DSmax_a
for i in range(0,K):
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-400)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-400)**2)
DSmax_a=max(DSset_a)
#create model
model1 = gp.Model("MyModel1")
#declare variables
q=model1.addVars(2, lb=0, ub=400, vtype=GRB.CONTINUOUS, name="q")
a=model1.addVars(K, lb=0, ub=DSmax_a, name="a")
v=model1.addVars(K, lb=1/(H**2+DSmax_a), ub=1/H/H, name="v")
r=model1.addVars(K, lb=math.sqrt(1/(H**2+DSmax_a)), ub=math.sqrt(1/H/H), name="r")
b=model1.addVars(K, lb=math.sqrt(Pmax), ub=math.sqrt(0.8), name="b")
t=model1.addVars(K, lb=math.sqrt(Pmax)*math.sqrt(1/(H**2+DSmax_a)), ub=math.sqrt(0.8)*math.sqrt(1/H/H), name="t")
t1=model1.addVars(K, K, lb=K*K*Pmax*(1/(H**2+DSmax_a)), ub=K*K*0.8*(1/H/H), name="t1")
nu=model1.addVars(sumcombination, lb=0, ub=deltaz, name="nu")
#set objective: choose M numbers from 1 to K; for each combination compute its term, and sum over all K!/M!/(K-M)! combinations to obtain sumvalue. The combinations are enumerated via depth-first search (dfs1 below).
sumvalue=0
for i in range(0, sumcombination):
    sumvalue=sumvalue+deltaz-nu[i]
obj=math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
model1.setObjective(obj, GRB.MINIMIZE)
model1.addConstrs((a[k]==(q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0,K)), name="c1")
model1.addConstrs((v[k]*(H**2+a[k]) == 1 for k in range(0,K)), name="c2")
model1.addConstrs((r[k]*r[k] == v[k] for k in range(0,K)), name="c3")
for k in range(0,K):
    model1.addConstr(t[k] == b[k]*r[k], name="c4")
for k in range(0,K):
    for k1 in range(0,K):
        model1.addConstr(t1[k,k1] == K*K*t[k]*t[k1], name="c5")
combination=[]
index=0
def dfs1(last_idx, cnt):
    global index
    if cnt == M:
        print(combination)
        sum1=0
        for k in range(0,K):
            for km in range(0,M):
                sum1=sum1+(t[combination[km]]*cov(k,combination[km]))
        sum2=0
        for km in range(0,M):
            for km1 in range(0,M):
                sum2=sum2+(t1[combination[km],combination[km1]]*cov(combination[km],combination[km1]))
        model1.addConstr(sum1**2 == nu[index]*(sum2+(N0w/beta0*K*K)), name="c6")
        index=index+1
        return
    for idx in range(last_idx+1,K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()
dfs1(-1,0)
model1.params.NonConvex=2
model1.params.ScaleFlag=2
model1.params.Aggregate=0
model1.params.MIPFocus=1
model1.params.Heuristics=0
model1.optimize()
model1.printQuality()
print("Obj: ", model1.objVal)
print("q: ", q)
print("b: ", b)
print("a: ", a)
print("v: ", v)
print("r: ", r)
print("t: ", t)
print("t1: ", t1)
print("nu: ", nu)

However, when I output model1.objVal as well as the variables q and b, and then substitute the values of q and b into the optimization objective, the resulting value differs significantly from model1.objVal, e.g., obj=0.06 while the theoretical value is 0.08. I want to know what the reason is? Many thanks!
All the best,
Yali
Hi Yali,
Could you please show the log output of your run?
Best regards,
Jaromił
Hi, Jaromił,
I provide the obj value from the interrupted run, together with the output variables q and b.
Set parameter NonConvex to value 2
Set parameter ScaleFlag to value 2
Set parameter Aggregate to value 0
Set parameter MIPFocus to value 1
Set parameter Heuristics to value 0
Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (win64)
CPU model: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 0 rows, 362 columns and 0 nonzeros
Model fingerprint: 0x6a69b6b9
Model has 350 quadratic constraints
Coefficient statistics:
Matrix range [0e+00, 0e+00]
QMatrix range [3e-03, 1e+02]
QLMatrix range [1e-05, 1e+04]
Objective range [5e-03, 5e-03]
Bounds range [4e-06, 2e+05]
RHS range [0e+00, 0e+00]
QRHS range [1e+00, 2e+05]
Continuous model is non-convex -- solving as a MIP
Presolve time: 0.02s
Presolved: 14376 rows, 3779 columns, 48050 nonzeros
Presolved model has 3547 bilinear constraint(s)
Variable types: 3779 continuous, 0 integer (0 binary)
Root relaxation: objective 1.942890e-15, 8271 iterations, 1.00 seconds (0.69 work units)
Solve interrupted
Warning: max constraint violation (3.3261e-04) exceeds tolerance
Best objective 6.416376350063e-02, best bound 8.970230881085e-04, gap 98.6020%
Solution quality statistics for model MyModel1 :
Maximum violation:
Bound : 0.00000000e+00
Constraint : 3.32613462e-04 (c2[5])
Integrality : 0.00000000e+00
Obj: 0.0641637635006318
q: {0: <gurobi.Var q[0] (value 114.0248630273182)>, 1: <gurobi.Var q[1] (value 383.953307075703)>}
b: {0: <gurobi.Var b[0] (value 0.12234856550114122)>, 1: <gurobi.Var b[1] (value 0.10552744429666128)>, 2: <gurobi.Var b[2] (value 0.14174623233860634)>, 3: <gurobi.Var b[3] (value 0.12295250758295925)>, 4: <gurobi.Var b[4] (value 0.10649007376104985)>, 5: <gurobi.Var b[5] (value 0.12164148028945027)>, 6: <gurobi.Var b[6] (value 0.20998489116213262)>, 7: <gurobi.Var b[7] (value 0.11926354958472472)>, 8: <gurobi.Var b[8] (value 0.19905353107749948)>, 9: <gurobi.Var b[9] (value 0.12135797283014446)>}

Using the obtained q and b, we write the code for the original optimization objective:
from numpy import *
import numpy as np
import math
import random
#define global variables
global K
K = 10
global M
M = 4
global ATtheta1
ATtheta1=0.42
global ATtheta2
ATtheta2=2.00
global CPtheta1
CPtheta1=0.11
global CPtheta2
CPtheta2=0.68
global CStheta1
CStheta1=0.05
global CStheta2
CStheta2=0.62
global H
H=100
global N0w
N0w=10**(-11) #to avoid large coefficients numerical issue
global beta0
beta0=10**(-4)
#The Positions of K users are given randomly
global upos
upos=[[103.1791349465347, 245.55998960612703], [255.48098415065635, 320.2532665566555], [64.55202835511899, 144.57501761863028], [379.2670054991101, 260.5062095134453], [28.71787081756203, 327.8530626055657], [14.324474423137357, 288.27329202612844], [206.68794921797164, 10.573469015806047], [60.666225853541754, 279.34522636111916], [312.530530067231, 93.03878739105653], [337.8884265853058, 197.4821008557357]]
#define distance
def DS(i,j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)
#The maximum value is obtained in order to normalize the distance
global DSset
DSset= []
global DSmax
for i in range(0,K):
    for j in range(0,K):
        DSset.append(DS(i,j))
DSmax=max(DSset)
#define function
def cov(i,j):
    return math.exp(-((DS(i,j)/DSmax/ATtheta1)**ATtheta2))
global deltaz
sum0=0
for k in range(0,K):
    for k1 in range(0,K):
        sum0=sum0+cov(k,k1)
deltaz=sum0/K/K
#initialized variables
global b
b=[0.12234856550114122, 0.10552744429666128, 0.14174623233860634, 0.12295250758295925, 0.10649007376104985, 0.12164148028945027, 0.20998489116213262, 0.11926354958472472, 0.19905353107749948, 0.12135797283014446]
global q
q=[114.0248630273182, 383.953307075703]
theoreticalvalue=0
combination=[]
def dfs1(last_idx, cnt):
    global theoreticalvalue
    if cnt == M:
        #print(combination)
        sum1=0
        for k in range(0,K):
            for km in range(0,M):
                sum1=sum1+(b[combination[km]]*math.sqrt(beta0/(H**2+(q[0]-upos[combination[km]][0])**2+(q[1]-upos[combination[km]][1])**2))*cov(k,combination[km]))
        sum2=0
        for km in range(0,M):
            for km1 in range(0,M):
                sum2=sum2+(b[combination[km]]*b[combination[km1]]*math.sqrt(beta0/(H**2+(q[0]-upos[combination[km]][0])**2+(q[1]-upos[combination[km]][1])**2))*math.sqrt(beta0/(H**2+(q[0]-upos[combination[km1]][0])**2+(q[1]-upos[combination[km1]][1])**2))*cov(combination[km],combination[km1]))
        theoreticalvalue=theoreticalvalue+deltaz-(sum1**2/K/K/(sum2+N0w))
        return
    for idx in range(last_idx+1,K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()
dfs1(-1,0)
theoreticalvalue=theoreticalvalue*math.factorial(M)*math.factorial(K-M)/math.factorial(K)
print("theoreticalvalue: ", theoreticalvalue)

However, the output theoreticalvalue is 0.08576845900534365.
I believe that in the code listed in the previous post I did not make any mistakes in writing the optimization objective and constraints of this formulated optimization problem, but I do not know why obj differs so significantly from the actual theoretical value for the same q and b.
Looking forward to your reply and help!
All the best,
Yali
Hi Yali,
What you see is very likely the result of the constraint violations of the solution point.
Solve interrupted
Warning: max constraint violation (3.3261e-04) exceeds tolerance
Best objective 6.416376350063e-02, best bound 8.970230881085e-04, gap 98.6020%
Solution quality statistics for model MyModel1 :
Maximum violation:
Bound : 0.00000000e+00
Constraint : 3.32613462e-04 (c2[5])
Integrality : 0.00000000e+00
The above output means that constraint \(\texttt{c2[5]}\) is violated by \(\texttt{3e-4}\). This can be caused by the settings you are using. I would try running with default settings. You might also want to decrease the value of the FeasibilityTol parameter to, e.g., 1e-7, to decrease the chance of finding a solution point which violates some constraints. Note that it may happen that the solver finds violated solutions if the problem is hard numerically. Setting the NumericFocus parameter might also help.
Note that even if the solution point does not violate the default feasibility tolerance of 1e-6, exploiting the feasibility tolerance up to 1e-6 may already be enough to construct a solution point with an objective value better than the theoretical optimum, which uses exact arithmetic. In general, the best way to avoid such an issue is to re-scale the model. In particular, it is important not to use coefficients which are close to the default feasibility tolerance of 1e-6, and to keep the difference between coefficient magnitudes within at most 6 orders of magnitude. I strongly recommend having a look at our Guidelines for Numerical Issues for more information.
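A back-of-the-envelope sketch can show why a tolerance-sized violation matters here (illustrative numbers only, chosen to match the magnitudes in the log, not taken from the actual solution). A constraint like c2, v * (H**2 + a) == 1, relates a variable of magnitude ~1e-4 to one of magnitude ~1e5, so an absolute violation that looks tiny still shifts v by a visible relative amount:

```python
H = 100
a = 3.0e5            # hypothetical squared distance of the right magnitude
violation = 3.3e-4   # the violation reported for c2[5] in the log

v_exact = 1 / (H**2 + a)
# A "feasible within tolerance" point may satisfy
# v * (H**2 + a) = 1 + violation instead of exactly 1.
v_violated = (1 + violation) / (H**2 + a)

# The relative perturbation of v equals the constraint violation itself,
# here about 0.033% -- enough to move a small objective value noticeably
# once it propagates through the squared terms of the model.
relative_shift = abs(v_violated - v_exact) / v_exact
assert abs(relative_shift - violation) < 1e-12
```

This is only a magnitude argument, but it illustrates why re-scaling (so that a tolerance-sized slack is also tiny relative to every variable) keeps the reported objective close to the exact-arithmetic value.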
Best regards,
Jaromił
Hi, Jaromił
Thanks for your reply! In Gurobi, if I want the solver to output the result once the incumbent has remained unchanged for N updates (and a feasible incumbent exists), instead of waiting until gap=0, how should I set this?
Looking forward to your reply and help!
All the best,
Yali
Hi Yali,
In Gurobi, if I want the solver to output the result once the incumbent has remained unchanged for N updates (and a feasible incumbent exists), instead of waiting until gap=0, how should I set this?
For this task, it is best to write a callback. You can use the cbGetSolution method to retrieve the new feasible point whenever one is found. Checking the maximum violation of this point in the callback is however not so easy. One way would be to generate a copy of your model called \(C\). Once you retrieve a new solution within the callback, you fix the lower and upper bounds of all variables in \(C\) to the values of the found solution and then optimize \(C\) in the callback. This should be very fast since everything is fixed. If the optimization status of \(C\) is OPTIMAL you can retrieve the MaxVio attribute to get the maximum violation of the solution point.
Note that this is not the best approach, as a feasible solution found by Gurobi will usually indeed be feasible and will not violate the given tolerances. It is an exception that a found solution violates the given tolerances, and it mainly happens due to some numerical issues in the model. Thus, I would propose an alternative approach. Here, you can save all feasible points found along the way through the callback. Once some termination criterion is met (see below), you can then use each found feasible point one by one to fix the lower/upper bound of each variable of your model and optimize it. This would work exactly the same as described above but from outside the callback, and you could use your original model for this.
In any callback you can always retrieve the current runtime of the optimization via the cbGet method. With this information you can write an own termination criterion within the callback and once it is satisfied you can call the terminate method to terminate the optimization from within a callback gracefully.
I recommend having a look at the callback.py example to get a better grasp of how callbacks work.
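As a sketch of such a termination criterion: the stagnation bookkeeping itself is plain Python and independent of the solver; only the wiring into a callback (reading the new incumbent objective and calling terminate) is Gurobi-specific and is shown here as comments. The class below is a hypothetical helper, not part of the Gurobi API:

```python
class StagnationTracker:
    """Tracks incumbent objective values (for a minimization problem) and
    signals termination once `patience` consecutive incumbents fail to
    improve the best value by more than `threshold`."""

    def __init__(self, threshold, patience):
        self.threshold = threshold  # minimum improvement that counts
        self.patience = patience    # allowed consecutive non-improvements
        self.best = None
        self.stale = 0

    def update(self, objval):
        """Feed a new incumbent value; returns True when the search
        should be terminated."""
        if self.best is None or self.best - objval > self.threshold:
            self.best = objval      # real improvement: reset the counter
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.patience

# Inside a Gurobi MIPSOL callback this could be wired up roughly as:
#   def cb(model, where):
#       if where == GRB.Callback.MIPSOL:
#           if tracker.update(model.cbGet(GRB.Callback.MIPSOL_OBJ)):
#               model.terminate()

tracker = StagnationTracker(threshold=1e-3, patience=3)
```

Since terminate stops the run gracefully, the best incumbent found so far remains available afterwards through the usual solution attributes.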
Best regards,
Jaromił
Hi,Jaromił
I want to know why the code runs very slowly when solving variable b with fixed variable q, but solving q with fixed variable b is faster? These two variables have similar forms in the optimization objective.

from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random
#define global variables
global K
K = 10
global M
M = 4
global ATtheta1
ATtheta1=0.42
global ATtheta2
ATtheta2=2.00
global H
H=100
global Pmax
Pmax=10**(-2)
global N0w
N0w=10**(-11) #to avoid large coefficients numerical issue
global beta0
beta0=10**(-4)
global sumcombination
sumcombination=int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))
#The Positions of K users are given randomly
global upos
upos=[[103.1791349465347, 245.55998960612703], [255.48098415065635, 320.2532665566555], [64.55202835511899, 144.57501761863028], [379.2670054991101, 260.5062095134453], [28.71787081756203, 327.8530626055657], [14.324474423137357, 288.27329202612844], [206.68794921797164, 10.573469015806047], [60.666225853541754, 279.34522636111916], [312.530530067231, 93.03878739105653], [337.8884265853058, 197.4821008557357]]
#define distance
def DS(i,j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)
#The maximum value is obtained in order to normalize the distance
global DSset
DSset= []
global DSmax
for i in range(0,K):
    for j in range(0,K):
        DSset.append(DS(i,j))
DSmax=max(DSset)
#define function
def cov(i,j):
    return math.exp(-((DS(i,j)/DSmax/ATtheta1)**ATtheta2))
global deltaz
sum0=0
for k in range(0,K):
    for k1 in range(0,K):
        sum0=sum0+cov(k,k1)
deltaz=sum0/K/K
#Obtain the maximum distance to tighten the range of variable a
global DSset_a
DSset_a= []
global DSmax_a
for i in range(0,K):
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-400)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-400)**2)
DSmax_a=max(DSset_a)
#create model
model1 = gp.Model("MyModel1")
#declare variables
q=model1.addVars(2, lb=0, ub=400, vtype=GRB.CONTINUOUS, name="q")
a=model1.addVars(K, lb=0, ub=DSmax_a, name="a")
v=model1.addVars(K, lb=1/(H**2+DSmax_a), ub=1/H/H, name="v")
r=model1.addVars(K, lb=math.sqrt(1/(H**2+DSmax_a)), ub=math.sqrt(1/H/H), name="r")
b=model1.addVars(K, lb=math.sqrt(Pmax), ub=math.sqrt(0.8), name="b")
t=model1.addVars(K, lb=math.sqrt(Pmax)*math.sqrt(1/(H**2+DSmax_a)), ub=math.sqrt(0.8)*math.sqrt(1/H/H), name="t")
t1=model1.addVars(K, K, lb=K*K*Pmax*(1/(H**2+DSmax_a)), ub=K*K*0.8*(1/H/H), name="t1")
nu=model1.addVars(sumcombination, lb=0, ub=deltaz, name="nu")
#set objective: choose M numbers from 1 to K; for each combination compute its term, and sum over all K!/M!/(K-M)! combinations to obtain sumvalue. The combinations are enumerated via depth-first search (dfs1 below).
sumvalue=0
for i in range(0, sumcombination):
    sumvalue=sumvalue+deltaz-nu[i]
obj=math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
model1.setObjective(obj, GRB.MINIMIZE)
model1.addConstrs((a[k]==(q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0,K)), name="c1")
model1.addConstrs((v[k]*(H**2+a[k]) == 1 for k in range(0,K)), name="c2")
model1.addConstrs((r[k]*r[k] == v[k] for k in range(0,K)), name="c3")
for k in range(0,K):
    model1.addConstr(t[k] == b[k]*r[k], name="c4")
model1.addConstrs((b[k] == x[k]*x[k] for k in range(0,K)), name="c0")
for k in range(0,K):
    for k1 in range(0,K):
        model1.addConstr(t1[k,k1] == K*K*t[k]*t[k1], name="c5")
combination=[]
index=0
def dfs1(last_idx, cnt):
    global index
    if cnt == M:
        sum1=0
        for k in range(0,K):
            for km in range(0,M):
                sum1=sum1+(100*t[combination[km]]*cov(k,combination[km]))
        sum2=0
        for km in range(0,M):
            for km1 in range(0,M):
                sum2=sum2+(10000*t1[combination[km],combination[km1]]*cov(combination[km],combination[km1]))
        model1.addConstr(sum1**2 == nu[index]*(sum2+(N0w/beta0*K*K*10000)), name="c6")
        index=index+1
        return
    for idx in range(last_idx+1,K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()
dfs1(-1,0)
model1.params.NonConvex=2
model1.params.NumericFocus=3
model1.params.FeasibilityTol=1e-7
model1.params.OptimalityTol=1e-7
model1.optimize()
model1.printQuality()
print("Obj: ", model1.objVal)
print("q: ", q)
print("b: ", b)
print("a: ", a)
print("v: ", v)
print("r: ", r)
print("t: ", t)
print("t1: ", t1)
print("nu: ", nu)Looking forward to your reply and help!
All the best,
Yali
The code you provided results in
Traceback (most recent call last):
File "test.py", line 161, in <module>
model1.addConstrs((b[k] == x[k]*x[k] for k in range(0,K)), name="c0")
File "src/gurobipy/model.pxi", line 3755, in gurobipy.Model.addConstrs
File "test.py", line 161, in <genexpr>
model1.addConstrs((b[k] == x[k]*x[k] for k in range(0,K)), name="c0")
NameError: name 'x' is not defined
Please edit your comment and make sure that your code is reproducible.
I want to know why the code runs very slowly when solving for variable b with variable q fixed, while solving for q with variable b fixed is faster. These two variables have similar forms in the optimization objective.
The two variables may appear in a similar form in the objective, but there are many more b variables, so when you fix all b variables you have only 2 original optimization variables:
q=model1.addVars(2, lb=0, ub=400, vtype=GRB.CONTINUOUS, name="q")
All other variables are auxiliary and can very likely be mostly determined from values of the b variables.
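To illustrate this point, here is a plain-Python sketch with hypothetical values (two sample user positions, a fixed trajectory point, fixed b values), using constraint forms like those in the posted code: once q and the b values are known, the auxiliary variables a, v, and r follow in closed form from constraints c1–c3.

```python
import math

H = 100
upos = [[103.18, 245.56], [255.48, 320.25]]  # two sample user positions
q = [200.0, 200.0]                            # a fixed trajectory point
b = [0.5, 0.6]                                # fixed b values

a, v, r = [], [], []
for k, (ux, uy) in enumerate(upos):
    a.append((q[0] - ux) ** 2 + (q[1] - uy) ** 2)  # c1: squared distance
    v.append(b[k] ** 2 / (H ** 2 + a[k]))          # c2, rearranged for v
    r.append(math.sqrt(v[k]))                      # c3: r is sqrt of v
print(a, v, r)
```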
Best regards,
Jaromił
Hi, Jaromił,
the code is shown as follows.
from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random
#define global variables
global K
K = 10
global M
M = 4
global ATtheta1
ATtheta1=0.42
global ATtheta2
ATtheta2=2.00
global H
H=100
global Pmax
Pmax=10**(-2)
global N0w
N0w=10**(-11) #to avoid large coefficients numerical issue
global beta0
beta0=10**(-5)
global sumcombination
sumcombination=int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))
#The Positions of K users are given randomly
global upos
#upos = []
#for i in range(0,K):
# upos.append([400*random.random(),400*random.random()])
upos=[[103.1791349465347, 245.55998960612703], [255.48098415065635, 320.2532665566555], [64.55202835511899, 144.57501761863028], [379.2670054991101, 260.5062095134453], [28.71787081756203, 327.8530626055657], [14.324474423137357, 288.27329202612844], [206.68794921797164, 10.573469015806047], [60.666225853541754, 279.34522636111916], [312.530530067231, 93.03878739105653], [337.8884265853058, 197.4821008557357]]
#define distance
def DS(i,j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)
#The maximum value is obtained in order to normalize the distance
global DSset
DSset= []
global DSmax
for i in range(0,K):
    for j in range(0,K):
        DSset.append(DS(i,j))
DSmax=max(DSset)
#define function
def cov(i,j):
    return math.exp(-((DS(i,j)/DSmax/ATtheta1)**ATtheta2))
global deltaz
sum0=0
for k in range(0,K):
    for k1 in range(0,K):
        sum0=sum0+cov(k,k1)
deltaz=sum0/K/K
#initialized variables
global b0
b0=math.sqrt(0.8)*np.array(ones((K)))
#Obtain the maximum distance to tighten the range of variable a
global DSset_a
DSset_a= []
global DSmax_a
for i in range(0,K):
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-400)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-400)**2)
DSmax_a=max(DSset_a)
#create model
model1 = gp.Model("MyModel1")
#declare variables
q=model1.addVars(2, lb=0, ub=400, vtype=GRB.CONTINUOUS, name="q")
a=model1.addVars(K, lb=0, ub=DSmax_a, name="a")
b=model1.addVars(K, lb=math.sqrt(Pmax), ub=math.sqrt(0.8), name="b")
v=model1.addVars(K, lb=Pmax/(H**2+DSmax_a), ub=0.8/H/H, name="v")
r=model1.addVars(K, lb=math.sqrt(Pmax/(H**2+DSmax_a)), ub=math.sqrt(0.8/H/H), name="r")
t=model1.addVars(K, K, lb=K*K*Pmax*(1/(H**2+DSmax_a)), ub=K*K*0.8*(1/H/H), name="t1")
nu=model1.addVars(sumcombination, lb=0, ub=deltaz, name="nu")
#set objective: choose M numbers from 1 to K, compute the sum for each combination, and add up all K!/M!/(K-M)! combination values to obtain sumvalue; the combinations are enumerated with a depth-first search
sumvalue=0
for i in range(0, sumcombination):
    sumvalue=sumvalue+deltaz-nu[i]
obj=math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
model1.setObjective(obj, GRB.MINIMIZE)
model1.addConstrs((a[k]==(q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0,K)), name="c1")
model1.addConstrs((v[k]*(H**2+a[k]) == b[k]*b[k] for k in range(0,K)), name="c2")
model1.addConstrs((r[k]*r[k] == v[k] for k in range(0,K)), name="c3")
for k in range(0,K):
    for k1 in range(0,K):
        model1.addConstr(t[k,k1] == K*K*r[k]*r[k1], name="c5")
combination=[]
index=0
def dfs1(last_idx, cnt):
    global index
    if cnt == M:
        sum1=0
        for k in range(0,K):
            for km in range(0,M):
                sum1=sum1+(100*r[combination[km]]*cov(k,combination[km]))
        sum2=0
        for km in range(0,M):
            for km1 in range(0,M):
                sum2=sum2+(10000*t[combination[km],combination[km1]]*cov(combination[km],combination[km1]))
        model1.addConstr(sum1**2 == nu[index]*(sum2+(N0w/beta0*K*K*10000)), name="c6")
        index=index+1
        return
    for idx in range(last_idx+1,K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()
dfs1(-1,0)
model1.params.NonConvex=2
model1.params.NumericFocus=2
model1.params.MIPFocus=1
startValues = b0
for i in range(K):
    b[i].start = startValues[i]
model1.optimize()
model1.printQuality()
print("Obj: ", model1.objVal)
print("q: ", q)
print("b: ", b)
print("a: ", a)
print("v: ", v)
print("r: ", r)
print("t: ", t)
print("nu: ", nu)Looking forward to your reply and help!
All the best,
Yali
Hi Yali,
Could you please point me to the line in your code where you fix b or q? If you mean the lines
for i in range(K):
    b[i].start = startValues[i]
then you are not actually fixing b. Please refer to How do I use MIP starts? to understand what the \(\texttt{start}\) attribute does.
If you want to actually fix a variable, you could set its bounds to a fixed value
for i in range(K):
    b[i].lb = startValues[i]
    b[i].ub = startValues[i]
Please edit your old comments when changing the code instead of re-posting it again.
Best regards,
Jaromił
Hi, Jaromił
Thanks for your reply!
In the above code we did not fix b or q; we optimized b and q simultaneously. For this code, fixing b and optimizing q runs faster, while fixing q and optimizing b is slower; I only meant that this situation occurs. Of course, even when we jointly optimize b and q, the solve time is still quite long. I don't know the reason for the long solve time, or how to resolve it.
Looking forward to your reply and help!
All the best,
Yali
Hi Yali,
I don't know the reason for the long simulation time and how to solve it?
Your model has many complex nonconvex equality constraints. There is not really anything left you can do except to give it more time or simplify your model.
You could try the reformulation I mentioned in my previous comment.
You could for example try relaxing the equality constraints to inequalities if your formulation allows for it. The best way to improve model performance in such cases is to use external knowledge about the application and either fix many variables or tighten the bounds of the variables by a lot.
I am afraid that there is nothing else to try here.
Best regards,
Jaromił