• Gurobi Staff

These are some very specific parameters you set. Is there a good reason for setting them?

Could you please share the model? This way, I could have a closer look. Note that uploading files in the Community Forum is not possible but we discuss an alternative in Posting to the Community Forum.

Best regards,
Jaromił

Hi, Jaromił

The main problem I want to solve is the alternating iteration of the following two problems.

The first one is to solve q with fixed b, and the second one is to solve b with fixed q. Optimization variables and auxiliary variables are highlighted in red underline.

My code is shown below.

from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random

#define global constants
K = 10
M = 4
ATtheta1 = 0.42
ATtheta2 = 2.00
H = 100
Pmax = 10**(-2)
#N0w = 10**(-11)  #to avoid a numerical issue caused by large coefficients
N0w = 0
beta0 = 10**(-4)
threshold = 10**(-3)
sumcombination = int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))

#the positions of the K users, generated randomly
upos = [[103.1791349465347, 245.55998960612703], [255.48098415065635, 320.2532665566555], [64.55202835511899, 144.57501761863028], [379.2670054991101, 260.5062095134453], [28.71787081756203, 327.8530626055657], [14.324474423137357, 288.27329202612844], [206.68794921797164, 10.573469015806047], [60.666225853541754, 279.34522636111916], [312.530530067231, 93.03878739105653], [337.8884265853058, 197.4821008557357]]

#define the distance between users i and j
def DS(i, j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)

#the maximum distance is used to normalize the distances
DSset = []
for i in range(0, K):
    for j in range(0, K):
        DSset.append(DS(i, j))
DSmax = max(DSset)

#define the covariance function
def cov(i, j):
    return math.exp(-((DS(i, j)/DSmax/ATtheta1)**ATtheta2))

sum0 = 0
for k in range(0, K):
    for k1 in range(0, K):
        sum0 = sum0 + cov(k, k1)
deltaz = sum0/K/K

#initialize the variables
b0 = math.sqrt(0.8)*np.array(ones((K)))

#obtain the maximum squared distance to tighten the range of variable a
DSset_a = []
for i in range(0, K):
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-400)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-400)**2)
DSmax_a = max(DSset_a)

opt = 0
for time in range(0, 20):

    #create the first model: solve q with b fixed
    model1 = gp.Model("MyModel1")

    #declare variables

    #set objective
    sumvalue = 0
    for i in range(0, sumcombination):
        sumvalue = sumvalue + deltaz - nu[i]
    obj = math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
    model1.setObjective(obj, GRB.MINIMIZE)

    model1.addConstrs((a[k] == (q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0, K)), name="c1")
    model1.addConstrs((v[k]*(H**2+a[k]) == 1 for k in range(0, K)), name="c2")
    model1.addConstrs((r[k]*r[k] == v[k] for k in range(0, K)), name="c3")
    for k in range(0, K):
        for k1 in range(0, K):
            pass  #(constraint construction elided in the original post)

    combination = []
    index = 0

    def dfs1(last_idx, cnt):
        global index
        if cnt == M:
            print(combination)
            sum1 = 0
            for k in range(0, K):
                for km in range(0, M):
                    sum1 = sum1 + (b0[combination[km]]*r[combination[km]]*cov(k, combination[km]))
            sum2 = 0
            for km in range(0, M):
                for km1 in range(0, M):
                    sum2 = sum2 + (b0[combination[km]]*b0[combination[km1]]*t[combination[km], combination[km1]]*cov(combination[km], combination[km1]))
            index = index + 1
            return
        for idx in range(last_idx+1, K):
            combination.append(idx)
            dfs1(idx, cnt+1)
            combination.pop()

    dfs1(-1, 0)

    model1.params.NonConvex = 2
    #model1.params.NumericFocus = 2
    model1.params.ScaleFlag = 2
    model1.params.Aggregate = 0
    model1.params.MIPFocus = 1
    model1.params.Heuristics = 0
    #model1.params.NoRelHeurWork = 500
    model1.optimize()
    model1.printQuality()

    print("Obj: ", model1.objVal)
    print("q: ", q)
    print("a: ", a)
    print("v: ", v)
    print("r: ", r)
    print("t: ", t)
    print("nu: ", nu)

    for i in range(0, 2):
        q[i] = q[i].X

    #create the second model: solve b with q fixed
    model2 = gp.Model("MyModel2")

    #declare variables

    #set objective
    sumvalue = 0
    for i in range(0, sumcombination):
        sumvalue = sumvalue + deltaz - nu1[i]
    obj = math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
    model2.setObjective(obj, GRB.MINIMIZE)

    for k in range(0, K):
        for k1 in range(0, K):
            pass  #(constraint construction elided in the original post)

    combination = []
    index = 0

    def dfs1(last_idx, cnt):
        global index
        if cnt == M:
            print(combination)
            sum1 = 0
            for k in range(0, K):
                for km in range(0, M):
                    sum1 = sum1 + (b[combination[km]]*math.sqrt(1/(H**2+(q[0]-upos[combination[km]][0])**2+(q[1]-upos[combination[km]][1])**2))*cov(k, combination[km]))
            sum2 = 0
            for km in range(0, M):
                for km1 in range(0, M):
                    sum2 = sum2 + (t1[combination[km], combination[km1]]*math.sqrt(1/(H**2+(q[0]-upos[combination[km]][0])**2+(q[1]-upos[combination[km]][1])**2))*math.sqrt(1/(H**2+(q[0]-upos[combination[km1]][0])**2+(q[1]-upos[combination[km1]][1])**2))*cov(combination[km], combination[km1]))
            index = index + 1
            return
        for idx in range(last_idx+1, K):
            combination.append(idx)
            dfs1(idx, cnt+1)
            combination.pop()

    dfs1(-1, 0)

    model2.params.NonConvex = 2
    #model2.params.NumericFocus = 2
    model2.params.ScaleFlag = 2
    model2.params.Aggregate = 0
    model2.params.MIPFocus = 1
    model2.params.Heuristics = 0
    #model2.params.NoRelHeurTime = 500
    #model2.params.NoRelHeurWork = 10
    model2.optimize()
    model2.printQuality()

    print("Obj: ", model2.objVal)
    print("b: ", b)
    print("t1: ", t1)
    print("nu1: ", nu1)

    for i in range(0, K):
        b0[i] = b[i].X

    def my_abs(x):
        if x >= 0:
            return x
        else:
            return -x

    if my_abs(model2.objVal - opt) <= threshold:
        break
    opt = model2.objVal

print("opt: ", opt)
print("q: ", q)
print("b: ", b)

The current problems are as follows.

(1) The running time is quite long. The Incumbent was decreasing and BestBd increasing, but the Incumbent has now remained essentially unchanged for some time while BestBd improves only very slowly, so the gap never reaches zero and no result is returned.

(2) The second problem is in some ways simpler than the first, yet its runtime is even longer.

(3) The first problem fixes b to solve for q; the second problem then uses the solved q to solve for b, and the objective should keep decreasing until convergence. In the second problem, however, at a gap of, say, 49%, the Incumbent barely changes while BestBd keeps moving, and the Incumbent value is higher than the objective achieved by the first problem. Since the gap is not yet zero, the final result may still turn out smaller than the first objective, but the running time is too long. Could there be some other reason for this deviation? That is, the objective value of the second problem should not be greater than the objective value of the first problem.

Those are all my questions; I look forward to your reply and help! Many thanks!

All the best,

Yali

• Gurobi Staff

Hi Yali,

There are a few things you could try to improve your model. I noticed that you have variables $$\texttt{a}$$ with very big bounds participating in bilinear terms together with variables with very small bounds, e.g., $$\texttt{v}$$ and $$\texttt{t}$$. It should help to re-scale your model to improve these bounds. Please refer to our Guidelines for Numerical Issues for more information.
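For illustration, re-scaling here just means substituting a = S * a_s for a constant S chosen from the data, so the scaled variable has bounds of order 1. The numbers below are made up; only the shape of the c2-style constraint comes from the post:

```python
# Re-scale a variable with large bounds: a = S * a_s, with S taken from the
# known upper bound of a, so a_s lives in [0, 1].
S = 2e5       # illustrative upper bound of a
a = 1.3e5     # some value of a in [0, 2e5]
a_s = a / S   # scaled variable, now in [0, 1]

H = 100
# The constraint v*(H**2 + a) == 1 becomes v*(H**2 + S*a_s) == 1 and is
# satisfied by exactly the same (v, a) pairs, but with tamer bound ranges.
v = 1 / (H**2 + a)
assert abs(v * (H**2 + S * a_s) - 1) < 1e-12
print("scaled bounds:", 0.0, "to", 1.0)
```

The same substitution applies to every constraint in which a appears; only the coefficients change, not the feasible set.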

Additionally, you could try to factor out the $$\texttt{nu}$$ variable in these constraints

c5: [ 14.9188163510744 r[0] ^2 + 18.82873058392342 r[0] * r[1]
+ 21.24123481996764 r[0] * r[2] + 17.9234690409059 r[0] * r[3]
+ 5.9408381848008 r[1] ^2 + 13.40406230237633 r[1] * r[2]
+ 11.31042040329848 r[1] * r[3] + 7.560754922834621 r[2] ^2
+ 12.75961194665704 r[2] * r[3] + 5.383314850530639 r[3] ^2
- 80 t[0,0] * nu[0] - 24.04260614960947 t[0,1] * nu[0]
- 49.08850665350337 t[0,2] * nu[0] - 3.28077821776377 t[0,3] * nu[0]
- 24.04260614960947 t[1,0] * nu[0] - 80 t[1,1] * nu[0]
- 4.804610859545798 t[1,2] * nu[0] - 36.33210383735641 t[1,3] * nu[0]
- 49.08850665350337 t[2,0] * nu[0] - 4.804610859545798 t[2,1] * nu[0]
- 80 t[2,2] * nu[0] - 0.7279300634456253 t[2,3] * nu[0]
- 3.28077821776377 t[3,0] * nu[0] - 36.33210383735641 t[3,1] * nu[0]
- 0.7279300634456253 t[3,2] * nu[0] - 80 t[3,3] * nu[0] ] = 0

For this, you would need to introduce a new free variable $$z$$ and the additional linear equality constraint $$z = \sum_i c_i t_i$$. Then you would have

c5: [ 14.9188163510744 r[0] ^2 + 18.82873058392342 r[0] * r[1]
+ 21.24123481996764 r[0] * r[2] + 17.9234690409059 r[0] * r[3]
+ 5.9408381848008 r[1] ^2 + 13.40406230237633 r[1] * r[2]
+ 11.31042040329848 r[1] * r[3] + 7.560754922834621 r[2] ^2
+ 12.75961194665704 r[2] * r[3] + 5.383314850530639 r[3] ^2
+ nu[0] * z ] = 0
new_con: - z - 80 t[0,0] - 24.04260614960947 t[0,1]
- 49.08850665350337 t[0,2] - 3.28077821776377 t[0,3]
- 24.04260614960947 t[1,0] - 80 t[1,1]
- 4.804610859545798 t[1,2] - 36.33210383735641 t[1,3]
- 49.08850665350337 t[2,0] - 4.804610859545798 t[2,1]
- 80 t[2,2] - 0.7279300634456253 t[2,3]
- 3.28077821776377 t[3,0] - 36.33210383735641 t[3,1]
- 0.7279300634456253 t[3,2] - 80 t[3,3] = 0

This is not guaranteed to help but it is usually worth a try.
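As a quick plausibility check of this substitution (with made-up coefficient and variable values, not taken from the model), fixing numbers shows the two forms agree while the number of bilinear terms drops from one per t-variable to a single nu*z product:

```python
c = [80.0, 24.04, 49.09, 3.28]   # illustrative coefficients c_i
t = [0.1, 0.2, 0.3, 0.4]         # sample values of the t-variables
nu = 0.7                         # sample value of nu[0]

# Original constraint body: nu multiplies every t_i -> len(c) bilinear terms.
original = sum(-ci * ti * nu for ci, ti in zip(c, t))

# Reformulation: one extra linear constraint defines z = -sum(c_i * t_i),
# and the quadratic constraint keeps a single bilinear term nu * z.
z = -sum(ci * ti for ci, ti in zip(c, t))
assert abs(original - nu * z) < 1e-12
print("bilinear terms:", len(c), "->", 1)  # prints "bilinear terms: 4 -> 1"
```

The linear constraint adds no nonconvexity; only the single product nu*z remains bilinear.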

Best regards,
Jaromił

Hi, Jaromił

My original optimization objective is as follows, with optimization variables highlighted in red underline.

In order to meet the solving requirements of gurobi, we have introduced many auxiliary variables, and the optimization problem is shown as follows, with optimization variables highlighted in green underline.

My code is as follows,

from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random

#define global constants
K = 10
M = 4
ATtheta1 = 0.42
ATtheta2 = 2.00
H = 100
Pmax = 10**(-2)
N0w = 10**(-11)  #to avoid a numerical issue caused by large coefficients
beta0 = 10**(-4)
sumcombination = int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))

#the positions of the K users, generated randomly
upos = [[103.1791349465347, 245.55998960612703], [255.48098415065635, 320.2532665566555], [64.55202835511899, 144.57501761863028], [379.2670054991101, 260.5062095134453], [28.71787081756203, 327.8530626055657], [14.324474423137357, 288.27329202612844], [206.68794921797164, 10.573469015806047], [60.666225853541754, 279.34522636111916], [312.530530067231, 93.03878739105653], [337.8884265853058, 197.4821008557357]]

#define the distance between users i and j
def DS(i, j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)

#the maximum distance is used to normalize the distances
DSset = []
for i in range(0, K):
    for j in range(0, K):
        DSset.append(DS(i, j))
DSmax = max(DSset)

#define the covariance function
def cov(i, j):
    return math.exp(-((DS(i, j)/DSmax/ATtheta1)**ATtheta2))

sum0 = 0
for k in range(0, K):
    for k1 in range(0, K):
        sum0 = sum0 + cov(k, k1)
deltaz = sum0/K/K

#obtain the maximum squared distance to tighten the range of variable a
DSset_a = []
for i in range(0, K):
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-400)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-400)**2)
DSmax_a = max(DSset_a)

#create model
model1 = gp.Model("MyModel1")

#declare variables

#set objective: choose M numbers out of 1..K; for each combination compute its
#term (sum7 in the formulation), and sum over all K!/(M!(K-M)!) combinations to
#obtain sumvalue. Combinations are enumerated with depth-first search. The
#optimization variables were shown in bold in the original post.
sumvalue = 0
for i in range(0, sumcombination):
    sumvalue = sumvalue + deltaz - nu[i]
obj = math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
model1.setObjective(obj, GRB.MINIMIZE)

model1.addConstrs((a[k] == (q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0, K)), name="c1")
model1.addConstrs((v[k]*(H**2+a[k]) == 1 for k in range(0, K)), name="c2")
model1.addConstrs((r[k]*r[k] == v[k] for k in range(0, K)), name="c3")
for k in range(0, K):
    for k1 in range(0, K):
        pass  #(constraint construction elided in the original post)

combination = []
index = 0

def dfs1(last_idx, cnt):
    global index
    if cnt == M:
        print(combination)
        sum1 = 0
        for k in range(0, K):
            for km in range(0, M):
                sum1 = sum1 + (t[combination[km]]*cov(k, combination[km]))
        sum2 = 0
        for km in range(0, M):
            for km1 in range(0, M):
                sum2 = sum2 + (t1[combination[km], combination[km1]]*cov(combination[km], combination[km1]))
        index = index + 1
        return
    for idx in range(last_idx+1, K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()

dfs1(-1, 0)

model1.params.NonConvex = 2
model1.params.ScaleFlag = 2
model1.params.Aggregate = 0
model1.params.MIPFocus = 1
model1.params.Heuristics = 0
model1.optimize()
model1.printQuality()

print("Obj: ", model1.objVal)
print("q: ", q)
print("b: ", b)
print("a: ", a)
print("v: ", v)
print("r: ", r)
print("t: ", t)
print("t1: ", t1)
print("nu: ", nu)

However, when I substitute the reported values of q and b back into the original optimization objective, the result differs noticeably from model1.objVal: for example, obj = 0.06 while the recomputed value is 0.08. What could be the reason for this? Many thanks!

All the best,

Yali

• Gurobi Staff

Hi Yali,

Best regards,
Jaromił

Hi, Jaromił,

I provide the results obtained when interrupting the solve: the objective obj and the output variables q and b.

Set parameter NonConvex to value 2
Set parameter ScaleFlag to value 2
Set parameter Aggregate to value 0
Set parameter MIPFocus to value 1
Set parameter Heuristics to value 0
Gurobi Optimizer version 10.0.3 build v10.0.3rc0 (win64)
CPU model: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz, instruction set [SSE2|AVX|AVX2|AVX512]
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 0 rows, 362 columns and 0 nonzeros
Model fingerprint: 0x6a69b6b9
Coefficient statistics:
Matrix range     [0e+00, 0e+00]
QMatrix range    [3e-03, 1e+02]
QLMatrix range   [1e-05, 1e+04]
Objective range  [5e-03, 5e-03]
Bounds range     [4e-06, 2e+05]
RHS range        [0e+00, 0e+00]
QRHS range       [1e+00, 2e+05]

Continuous model is non-convex -- solving as a MIP
Presolve time: 0.02s
Presolved: 14376 rows, 3779 columns, 48050 nonzeros
Presolved model has 3547 bilinear constraint(s)
Variable types: 3779 continuous, 0 integer (0 binary)
Root relaxation: objective 1.942890e-15, 8271 iterations, 1.00 seconds (0.69 work units)

Solve interrupted
Warning: max constraint violation (3.3261e-04) exceeds tolerance
Best objective 6.416376350063e-02, best bound 8.970230881085e-04, gap 98.6020%

Solution quality statistics for model MyModel1 :
Maximum violation:
Bound       : 0.00000000e+00
Constraint  : 3.32613462e-04 (c2[5])
Integrality : 0.00000000e+00
Obj:  0.0641637635006318
q:  {0: <gurobi.Var q[0] (value 114.0248630273182)>, 1: <gurobi.Var q[1] (value 383.953307075703)>}
b:  {0: <gurobi.Var b[0] (value 0.12234856550114122)>, 1: <gurobi.Var b[1] (value 0.10552744429666128)>, 2: <gurobi.Var b[2] (value 0.14174623233860634)>, 3: <gurobi.Var b[3] (value 0.12295250758295925)>, 4: <gurobi.Var b[4] (value 0.10649007376104985)>, 5: <gurobi.Var b[5] (value 0.12164148028945027)>, 6: <gurobi.Var b[6] (value 0.20998489116213262)>, 7: <gurobi.Var b[7] (value 0.11926354958472472)>, 8: <gurobi.Var b[8] (value 0.19905353107749948)>, 9: <gurobi.Var b[9] (value 0.12135797283014446)>}

Using the obtained q and b, we wrote the following code to evaluate the original optimization objective:

from numpy import *
import numpy as np
import math
import random

#define global constants
K = 10
M = 4
ATtheta1 = 0.42
ATtheta2 = 2.00
CPtheta1 = 0.11
CPtheta2 = 0.68
CStheta1 = 0.05
CStheta2 = 0.62
H = 100
N0w = 10**(-11)  #to avoid a numerical issue caused by large coefficients
beta0 = 10**(-4)

#the positions of the K users, generated randomly
upos = [[103.1791349465347, 245.55998960612703], [255.48098415065635, 320.2532665566555], [64.55202835511899, 144.57501761863028], [379.2670054991101, 260.5062095134453], [28.71787081756203, 327.8530626055657], [14.324474423137357, 288.27329202612844], [206.68794921797164, 10.573469015806047], [60.666225853541754, 279.34522636111916], [312.530530067231, 93.03878739105653], [337.8884265853058, 197.4821008557357]]

#define the distance between users i and j
def DS(i, j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)

#the maximum distance is used to normalize the distances
DSset = []
for i in range(0, K):
    for j in range(0, K):
        DSset.append(DS(i, j))
DSmax = max(DSset)

#define the covariance function
def cov(i, j):
    return math.exp(-((DS(i, j)/DSmax/ATtheta1)**ATtheta2))

sum0 = 0
for k in range(0, K):
    for k1 in range(0, K):
        sum0 = sum0 + cov(k, k1)
deltaz = sum0/K/K

#the values of b and q returned by the solver
b = [0.12234856550114122, 0.10552744429666128, 0.14174623233860634, 0.12295250758295925, 0.10649007376104985, 0.12164148028945027, 0.20998489116213262, 0.11926354958472472, 0.19905353107749948, 0.12135797283014446]
q = [114.0248630273182, 383.953307075703]

theoreticalvalue = 0
combination = []

def dfs1(last_idx, cnt):
    global theoreticalvalue
    if cnt == M:
        #print(combination)
        sum1 = 0
        for k in range(0, K):
            for km in range(0, M):
                sum1 = sum1 + (b[combination[km]]*math.sqrt(beta0/(H**2+(q[0]-upos[combination[km]][0])**2+(q[1]-upos[combination[km]][1])**2))*cov(k, combination[km]))
        sum2 = 0
        for km in range(0, M):
            for km1 in range(0, M):
                sum2 = sum2 + (b[combination[km]]*b[combination[km1]]*math.sqrt(beta0/(H**2+(q[0]-upos[combination[km]][0])**2+(q[1]-upos[combination[km]][1])**2))*math.sqrt(beta0/(H**2+(q[0]-upos[combination[km1]][0])**2+(q[1]-upos[combination[km1]][1])**2))*cov(combination[km], combination[km1]))
        theoreticalvalue = theoreticalvalue + deltaz - (sum1**2/K/K/(sum2+N0w))
        return
    for idx in range(last_idx+1, K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()

dfs1(-1, 0)
theoreticalvalue = theoreticalvalue*math.factorial(M)*math.factorial(K-M)/math.factorial(K)
print("theoreticalvalue: ", theoreticalvalue)

However, the output theoreticalvalue is 0.08576845900534365.

I believe that, in the code listed in my previous post, I did not make any mistake in writing the optimization objective and constraints of the formulated problem, so I do not understand why obj differs so much from the actual value recomputed with the same q and b.

All the best,

Yali

• Gurobi Staff

Hi Yali,

What you see is very likely the result of the constraint violations of the solution point.

Solve interrupted
Warning: max constraint violation (3.3261e-04) exceeds tolerance
Best objective 6.416376350063e-02, best bound 8.970230881085e-04, gap 98.6020%

Solution quality statistics for model MyModel1 :
Maximum violation:
Bound       : 0.00000000e+00
Constraint  : 3.32613462e-04 (c2[5])
Integrality : 0.00000000e+00

The above output means that constraint $$\texttt{c2[5]}$$ is violated by $$\texttt{3e-4}$$. This can be caused by the settings you are using. I would try running with default settings. You might also want to decrease the value of the FeasibilityTol parameter to, e.g., 1e-7, to decrease the chance of finding a solution point which violates some constraints. Note that it may happen that the solver finds violated solutions if the problem is hard numerically. Setting the NumericFocus parameter might also help.

Note that even if the solution point does not violate the default feasibility tolerance of 1e-6, exploiting the feasibility tolerance up to 1e-6 may already be enough to construct a solution point with an objective value better than the true optimum computed in exact arithmetic. In general, the best way to avoid such an issue is to re-scale the model. In particular, it is important not to use coefficients which are close to the default feasibility tolerance of 1e-6, and to keep the spread between coefficients within at most 6 orders of magnitude. I strongly recommend having a look at our Guidelines for Numerical Issues for more information.
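A small arithmetic illustration of the last point (the numbers are invented, not taken from the model): when a constraint coefficient is close to the feasibility tolerance, a point that violates the constraint by only the tolerance can sit far from the exact solution in variable space:

```python
tol = 1e-6            # Gurobi's default FeasibilityTol
a, rhs = 1e-5, 1.0    # a coefficient close to the tolerance (illustrative)

x_exact = rhs / a
x_accepted = (rhs - tol) / a   # still "feasible": |a*x - rhs| == tol
assert abs(a * x_accepted - rhs) <= tol + 1e-15

drift = abs(x_accepted - x_exact)   # equals tol / a = 0.1
assert abs(drift - tol / a) < 1e-9
print("drift in x:", drift)
```

So a constraint violation of only 1e-6 can translate into a 0.1 shift in a variable value, which then propagates into any quantity recomputed from that variable, such as the objective.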

Best regards,
Jaromił

Hi, Jaromił

Thanks for your reply! In Gurobi, if I want the solver to stop and return its results once the Incumbent has remained unchanged for N consecutive updates (and a feasible incumbent exists), instead of waiting until the gap reaches 0, how should I set this up?

All the best,

Yali

• Gurobi Staff

Hi Yali,

In Gurobi, if I want the solver to stop and return its results once the Incumbent has remained unchanged for N consecutive updates (and a feasible incumbent exists), instead of waiting until the gap reaches 0, how should I set this up?

For this task, it is best to write a callback. You can use the cbGetSolution method to retrieve the new feasible point whenever one is found. Checking the maximum violation of this point in the callback is however not so easy. One way would be to generate a copy of your model called $$C$$. Once you retrieve a new solution within the callback, you fix the lower and upper bounds of all variables in $$C$$ to the values of the found solution and then optimize $$C$$ in the callback. This should be very fast since everything is fixed. If the optimization status of $$C$$ is OPTIMAL you can retrieve the MaxVio attribute to get the maximum violation of the solution point.
Note that this is not the best approach, as a feasible solution found by Gurobi will usually indeed be feasible and will not violate the given tolerances. It is an exception that a found solution violates the given tolerances, and it mainly happens due to numerical issues in the model.

Thus, I would propose to use an alternative approach. Here, you can save all feasible points found along the way through the callback. Once some termination criterion is met (see below) you can then use each found feasible point one by one to fix the lower/upper bound of each variable of your model and optimize it. This would work exactly the same as described above but from outside the callback and you could use your original model for this.

In any callback you can always retrieve the current runtime of the optimization via the cbGet method. With this information you can implement your own termination criterion within the callback, and once it is satisfied you can call the terminate method to gracefully stop the optimization from within the callback.

I recommend having a look at the callback.py example to get a better grasp of how callbacks work.
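The rule "stop once the incumbent has not improved for N consecutive new solutions" can be kept in a small plain-Python helper and driven from the callback. The helper below is testable on its own; the gurobipy wiring in the comment is a sketch following the callback documentation, not code taken from this thread:

```python
class StallDetector:
    """Tracks incumbent objective values (minimization) and reports when the
    last `patience` updates brought no improvement larger than `eps`."""

    def __init__(self, patience, eps=1e-6):
        self.patience = patience
        self.eps = eps
        self.best = None
        self.stalled = 0

    def update(self, obj):
        # Returns True once `patience` consecutive incumbents failed to
        # improve on the best objective seen so far by more than eps.
        if self.best is None or obj < self.best - self.eps:
            self.best = obj
            self.stalled = 0
        else:
            self.stalled += 1
        return self.stalled >= self.patience


# Hypothetical gurobipy wiring (names as in the gurobipy docs):
#
# def cb(model, where):
#     if where == GRB.Callback.MIPSOL:
#         obj = model.cbGet(GRB.Callback.MIPSOL_OBJ)
#         if detector.update(obj):
#             model.terminate()
#
# detector = StallDetector(patience=5)
# model1.optimize(cb)

detector = StallDetector(patience=3)
print([detector.update(v) for v in [1.0, 0.5, 0.5, 0.5, 0.5]])
# prints [False, False, False, False, True]
```

After terminate() the solver stops gracefully and the best incumbent found so far remains available through the usual solution attributes.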

Best regards,
Jaromił

Hi, Jaromił

I want to know why the code runs very slowly when solving for variable b with variable q fixed, while solving for q with b fixed is faster? These two variables appear in similar forms in the optimization objective. My code:
from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random

#define global constants
K = 10
M = 4
ATtheta1 = 0.42
ATtheta2 = 2.00
H = 100
Pmax = 10**(-2)
N0w = 10**(-11)  #to avoid a numerical issue caused by large coefficients
beta0 = 10**(-4)
sumcombination = int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))

#the positions of the K users, generated randomly
upos = [[103.1791349465347, 245.55998960612703], [255.48098415065635, 320.2532665566555], [64.55202835511899, 144.57501761863028], [379.2670054991101, 260.5062095134453], [28.71787081756203, 327.8530626055657], [14.324474423137357, 288.27329202612844], [206.68794921797164, 10.573469015806047], [60.666225853541754, 279.34522636111916], [312.530530067231, 93.03878739105653], [337.8884265853058, 197.4821008557357]]

#define the distance between users i and j
def DS(i, j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)

#the maximum distance is used to normalize the distances
DSset = []
for i in range(0, K):
    for j in range(0, K):
        DSset.append(DS(i, j))
DSmax = max(DSset)

#define the covariance function
def cov(i, j):
    return math.exp(-((DS(i, j)/DSmax/ATtheta1)**ATtheta2))

sum0 = 0
for k in range(0, K):
    for k1 in range(0, K):
        sum0 = sum0 + cov(k, k1)
deltaz = sum0/K/K

#obtain the maximum squared distance to tighten the range of variable a
DSset_a = []
for i in range(0, K):
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-400)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-400)**2)
DSmax_a = max(DSset_a)

#create model
model1 = gp.Model("MyModel1")

#declare variables

#set objective: choose M numbers out of 1..K; for each combination compute its
#term (sum7 in the formulation), and sum over all K!/(M!(K-M)!) combinations to
#obtain sumvalue. Combinations are enumerated with depth-first search. The
#optimization variables were shown in bold in the original post.
sumvalue = 0
for i in range(0, sumcombination):
    sumvalue = sumvalue + deltaz - nu[i]
obj = math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
model1.setObjective(obj, GRB.MINIMIZE)

model1.addConstrs((a[k] == (q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0, K)), name="c1")
model1.addConstrs((v[k]*(H**2+a[k]) == 1 for k in range(0, K)), name="c2")
model1.addConstrs((r[k]*r[k] == v[k] for k in range(0, K)), name="c3")
for k in range(0, K):
    model1.addConstrs((b[k] == x[k]*x[k] for k in range(0, K)), name="c0")
for k in range(0, K):
    for k1 in range(0, K):
        pass  #(constraint construction elided in the original post)

combination = []
index = 0

def dfs1(last_idx, cnt):
    global index
    if cnt == M:
        sum1 = 0
        for k in range(0, K):
            for km in range(0, M):
                sum1 = sum1 + (100*t[combination[km]]*cov(k, combination[km]))
        sum2 = 0
        for km in range(0, M):
            for km1 in range(0, M):
                sum2 = sum2 + (10000*t1[combination[km], combination[km1]]*cov(combination[km], combination[km1]))
        index = index + 1
        return
    for idx in range(last_idx+1, K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()

dfs1(-1, 0)

model1.params.NonConvex = 2
model1.params.NumericFocus = 3
model1.params.FeasibilityTol = 1e-7
model1.params.OptimalityTol = 1e-7
model1.optimize()
model1.printQuality()

print("Obj: ", model1.objVal)
print("q: ", q)
print("b: ", b)
print("a: ", a)
print("v: ", v)
print("r: ", r)
print("t: ", t)
print("t1: ", t1)
print("nu: ", nu)

All the best,

Yali

• Gurobi Staff

The code you provided results in

Traceback (most recent call last):
  File "test.py", line 161, in <module>
    model1.addConstrs((b[k] == x[k]*x[k] for k in range(0,K)), name="c0")
  File "src/gurobipy/model.pxi", line 3755, in gurobipy.Model.addConstrs
  File "test.py", line 161, in <genexpr>
    model1.addConstrs((b[k] == x[k]*x[k] for k in range(0,K)), name="c0")
NameError: name 'x' is not defined

I want to know why the code runs very slowly when solving for variable b with variable q fixed, while solving for q with b fixed is faster? These two variables appear in similar forms in the optimization objective.

The two variables may appear in a similar form in the objective, but there are many more b variables, so when you fix all b variables you are left with only 2 original optimization variables, q[0] and q[1].

All other variables are auxiliary and can very likely be mostly determined from the values of the b variables.

Best regards,
Jaromił

Hi, Jaromił,

The code is shown below.

from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random

#define global constants
K = 10
M = 4
ATtheta1 = 0.42
ATtheta2 = 2.00
H = 100
Pmax = 10**(-2)
N0w = 10**(-11)  #to avoid a numerical issue caused by large coefficients
beta0 = 10**(-5)
sumcombination = int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))

#the positions of the K users, generated randomly
#upos = []
#for i in range(0, K):
#    upos.append([400*random.random(), 400*random.random()])
upos = [[103.1791349465347, 245.55998960612703], [255.48098415065635, 320.2532665566555], [64.55202835511899, 144.57501761863028], [379.2670054991101, 260.5062095134453], [28.71787081756203, 327.8530626055657], [14.324474423137357, 288.27329202612844], [206.68794921797164, 10.573469015806047], [60.666225853541754, 279.34522636111916], [312.530530067231, 93.03878739105653], [337.8884265853058, 197.4821008557357]]

#define the distance between users i and j
def DS(i, j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)

#the maximum distance is used to normalize the distances
DSset = []
for i in range(0, K):
    for j in range(0, K):
        DSset.append(DS(i, j))
DSmax = max(DSset)

#define the covariance function
def cov(i, j):
    return math.exp(-((DS(i, j)/DSmax/ATtheta1)**ATtheta2))

sum0 = 0
for k in range(0, K):
    for k1 in range(0, K):
        sum0 = sum0 + cov(k, k1)
deltaz = sum0/K/K

#initialize the variables
b0 = math.sqrt(0.8)*np.array(ones((K)))

#obtain the maximum squared distance to tighten the range of variable a
DSset_a = []
for i in range(0, K):
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-0)**2)
    DSset_a.append((upos[i][0]-0)**2+(upos[i][1]-400)**2)
    DSset_a.append((upos[i][0]-400)**2+(upos[i][1]-400)**2)
DSmax_a = max(DSset_a)

#create model
model1 = gp.Model("MyModel1")

#declare variables

#set objective: choose M numbers out of 1..K; for each combination compute its
#term (sum7 in the formulation), and sum over all K!/(M!(K-M)!) combinations to
#obtain sumvalue. Combinations are enumerated with depth-first search. The
#optimization variables were shown in bold in the original post.
sumvalue = 0
for i in range(0, sumcombination):
    sumvalue = sumvalue + deltaz - nu[i]
obj = math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
model1.setObjective(obj, GRB.MINIMIZE)

model1.addConstrs((a[k] == (q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0, K)), name="c1")
model1.addConstrs((v[k]*(H**2+a[k]) == b[k]*b[k] for k in range(0, K)), name="c2")
model1.addConstrs((r[k]*r[k] == v[k] for k in range(0, K)), name="c3")
for k in range(0, K):
    for k1 in range(0, K):
        pass  #(constraint construction elided in the original post)

combination = []
index = 0

def dfs1(last_idx, cnt):
    global index
    if cnt == M:
        sum1 = 0
        for k in range(0, K):
            for km in range(0, M):
                sum1 = sum1 + (100*r[combination[km]]*cov(k, combination[km]))
        sum2 = 0
        for km in range(0, M):
            for km1 in range(0, M):
                sum2 = sum2 + (10000*t[combination[km], combination[km1]]*cov(combination[km], combination[km1]))
        index = index + 1
        return
    for idx in range(last_idx+1, K):
        combination.append(idx)
        dfs1(idx, cnt+1)
        combination.pop()

dfs1(-1, 0)

model1.params.NonConvex = 2
model1.params.NumericFocus = 2
model1.params.MIPFocus = 1

startValues = b0
for i in range(K):
    b[i].start = startValues[i]

model1.optimize()
model1.printQuality()

print("Obj: ", model1.objVal)
print("q: ", q)
print("b: ", b)
print("a: ", a)
print("v: ", v)
print("r: ", r)
print("t: ", t)
print("nu: ", nu)

All the best,

Yali

• Gurobi Staff

Hi Yali,

Could you please point me to the line in your code where you fix b or q? If you mean the lines

for i in range(K):
    b[i].start = startValues[i]

then you are not actually fixing b. Please refer to How do I use MIP starts? to understand what the $$\texttt{start}$$ attribute does.

If you want to actually fix a variable, you could set its bounds to a fixed value

for i in range(K):
    b[i].lb = startValues[i]
    b[i].ub = startValues[i]

Best regards,
Jaromił

Hi, Jaromił

In the above code we did not fix b or q; we optimized b and q jointly. For this code, fixing b and optimizing q solves faster, while fixing q and optimizing b is slower; I was simply pointing out that this situation occurs. Of course, the solve time for the joint optimization of b and q is still quite long. I do not know the reason for the long solve time; how can I address it?

All the best,

Yali

• Gurobi Staff

Hi Yali,

I don't know the reason for the long simulation time and how to solve it?

Your model has many complex nonconvex equality constraints. There is not really anything left you can do except to give it more time or simplify your model.

You could try the reformulation I mentioned in my previous comment.

You could for example try relaxing the equality constraints to inequalities if your formulation allows for it. The best way to improve model performance in such cases is to use external knowledge about the application and either fix many variables or tighten the bounds of the variables by a lot.
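To make the bound-tightening suggestion concrete with the quantities from the posted code: constraints c1-c3 and the 400 x 400 area already imply bounds for a, v, and r. The derivation below is a sketch; it uses the squared diagonal as a simple upper bound on a, which is slightly looser than the per-user DSmax_a computed in the post:

```python
import math

H = 100
# With all users inside the 400 x 400 area, the squared horizontal distance
# a[k] from the point q to any user is at most the squared diagonal.
DSmax_a = 400**2 + 400**2   # 320000

# From c1/c2/c3 in the post:
#   a in [0, DSmax_a]
#   v[k] = 1/(H**2 + a[k])  =>  v in [1/(H**2 + DSmax_a), 1/H**2]
#   r[k] = sqrt(v[k])       =>  r in [sqrt(v_lb), sqrt(v_ub)]
v_lb, v_ub = 1/(H**2 + DSmax_a), 1/H**2
r_lb, r_ub = math.sqrt(v_lb), math.sqrt(v_ub)

print(v_lb, v_ub)   # roughly 3.0e-06 .. 1e-04
print(r_lb, r_ub)   # roughly 1.7e-03 .. 1e-02
```

These values could then be passed as lb/ub when creating v and r (e.g., via the lb and ub arguments of addVars), shrinking the domains on which Gurobi has to branch for the bilinear terms.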

I am afraid that there is nothing else to try here.

Best regards,
Jaromił