AttributeError: Unable to retrieve attribute 'objVal'
Answered
We use Gurobi to solve the following problem. First, let me introduce the details. We choose M numbers from 1 to K, and for each combination we calculate the quantity defined in the problem statement; finally we obtain the sum over all K!/(M!(K-M)!) combinations. The M numbers form one set, and 1 to K form another set.
The optimization problem is shown as follows.
The code is shown as follows.
from gurobipy import *
import gurobipy as gp
from numpy import *
import numpy as np
import math
import random
#define global variables
global K
K = 10
global M
M = 5
global ATtheta1
ATtheta1=0.42
global ATtheta2
ATtheta2=2.00
global CPtheta1
CPtheta1=0.11
global CPtheta2
CPtheta2=0.68
global CStheta1
CStheta1=0.05
global CStheta2
CStheta2=0.62
global H
H=100
global Pmax
Pmax=10**(-2)
global N0w
N0w=10**(-8)*10**(-3)
global beta0
beta0=10**(-4)
global threshold
threshold=10**(-3)
global sumcombination
sumcombination=int(math.factorial(K)/math.factorial(M)/math.factorial(K-M))
#The Positions of K users are given randomly
global upos
upos = []
for i in range(0,K):
    upos.append([400*random.random(),400*random.random()])
#define distance
def DS(i,j):
    return math.sqrt((upos[i][0]-upos[j][0])**2+(upos[i][1]-upos[j][1])**2)
#The maximum value is obtained in order to normalize the distance
global DSset
DSset= []
global DSmax
for i in range(0,K):
    for j in range(0,K):
        DSset.append(DS(i,j))
DSmax=max(DSset)
#define function
def cov(i,j):
    return math.exp((-DS(i,j)/DSmax/ATtheta1)**ATtheta2)
#initialized variables
b0=math.sqrt(Pmax)*np.array(ones((K)))
q0=np.array([250,250])
#create model
model = gp.Model("MyModel2")
#declare variables
q=model.addVars(2, lb=0, ub=400, vtype=GRB.CONTINUOUS, name="q")
a=model.addVars(K, name="a")
v=model.addVars(K, name="v")
r=model.addVars(K, name="r")
t=model.addVars(K, K, name="t")
varpi=model.addVars(sumcombination, name="varpi")
nu=model.addVars(sumcombination, name="nu")
delta=model.addVars(sumcombination, name="delta")
mu=model.addVars(sumcombination, name="mu")
#set objective (choose M numbers from 1 to K; for each combination compute the per-combination sum, and finally obtain the sum over all K!/(M!(K-M)!) combinations, shown as sumvalue; the combinations are enumerated with a depth-first search)
sumvalue=0
for i in range(0, sumcombination):
    sumvalue=sumvalue+2*nu[i]-math.log(K**2)-mu[i]
obj=math.factorial(M)*math.factorial(K-M)/math.factorial(K)*sumvalue
model.setObjective(obj, GRB.MAXIMIZE)
#constraint (a)
model.addConstrs(a[k]==(q[0]-upos[k][0])**2+(q[1]-upos[k][1])**2 for k in range(0,K))
#constraint (b)
model.addConstrs(v[k]*(H**2+a[k])/beta0 == 1 for k in range(0,K))
#constraint (c)
for k in range(0,K):
    model.addGenConstrPow(v[k], r[k], 0.5)
#constraint (d)
for k in range(0,K):
    for k1 in range(0,K):
        model.addConstr(t[(k,k1)] == r[k]*r[k1])
#constraint (e,f,g,h)
combination=[]
index=0
def dfs(last_idx, cnt):
    global index
    if cnt == M:
        print(combination)
        sum1=0
        for k in range(0,K):
            for km in range(0,M):
                sum1=sum1+(b0[combination[km]]*r[combination[km]]*cov(k,combination[km]))
        #constraint (e)
        model.addConstr(varpi[index] == sum1)
        #constraint (f)
        model.addGenConstrLogA(varpi[index], nu[index], 10.0)
        sum2=0
        for km in range(0,M):
            for km1 in range(0,M):
                sum2=sum2+(b0[combination[km]]*b0[combination[km1]]*t[(combination[km],combination[km1])]*cov(combination[km],combination[km1]))
        #constraint (g)
        model.addConstr(delta[index] == sum2+N0w)
        #constraint (h)
        model.addGenConstrLogA(delta[index], mu[index], 10.0)
        index=index+1
        return
    for idx in range(last_idx+1,K):
        combination.append(idx)
        dfs(idx, cnt+1)
        combination.pop()
dfs(-1,0)
model.params.NonConvex=2
#optimize
model.optimize()
print("Obj: ", model.objVal)
opt=model.objVal
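As a standalone sanity check of the enumeration logic (assuming K = 10 and M = 5 as above), the depth-first search should visit each M-subset exactly once, so the number of (varpi, nu, delta, mu) indices matches sumcombination:

```python
import math
from itertools import combinations

K, M = 10, 5
sumcombination = math.factorial(K) // (math.factorial(M) * math.factorial(K - M))

def dfs_count(last_idx, cnt, combination, out):
    """Same recursion as in the model-building code, but only collecting subsets."""
    if cnt == M:
        out.append(tuple(combination))
        return
    for idx in range(last_idx + 1, K):
        combination.append(idx)
        dfs_count(idx, cnt + 1, combination, out)
        combination.pop()

subsets = []
dfs_count(-1, 0, [], subsets)
# Each M-subset appears exactly once, matching itertools.combinations
assert len(subsets) == sumcombination == len(list(combinations(range(K), M)))
print(len(subsets))  # 252
```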
The error \(\texttt{Unable to retrieve attribute 'objVal'}\) usually occurs if the solution is not available, for example, because the model is infeasible. Please check the Status before accessing the attribute ObjVal.
To check why your model is infeasible you could use Model.computeIIS(), see also How do I determine why my model is infeasible?
I do not know if this is an issue in your model. But please note that the default lower bound on variables is 0.
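The guard described here can be sketched without a live model (the numeric codes 2 and 3 are the documented values of GRB.OPTIMAL and GRB.INFEASIBLE; in real code you would compare model.Status against those constants before reading model.ObjVal):

```python
# Stand-in for the real gurobipy pattern:
#   if model.Status == GRB.OPTIMAL: print(model.ObjVal)
OPTIMAL, INFEASIBLE = 2, 3  # documented Gurobi status codes

def objective_or_reason(status, objval=None):
    """Only touch the objective when a solution is actually available."""
    if status == OPTIMAL:
        return f"Obj: {objval}"
    if status == INFEASIBLE:
        return "Model is infeasible; run model.computeIIS() to see why"
    return f"No solution available (status {status})"

print(objective_or_reason(OPTIMAL, 3.14))  # Obj: 3.14
print(objective_or_reason(INFEASIBLE))
```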
Hi, Marika, thank you for your reply!
Apart from the code, is there a problem with my optimization problem itself, and is it in a form that Gurobi can solve?
Looking forward to your reply! Many thanks!
All the best,
Yali
Hi, Marika, based on the above details and background introduction, I would like to ask you further questions. This is my original optimization problem, optimization variables are underlined in red.
However, because the objective function contains the log function, auxiliary variables are introduced and variable substitution is performed. The optimization problem is reformulated as follows.
Auxiliary variables are underlined in green.
What I want to ask is: with the above code, my model status is infeasible. When I add “model.feasRelaxS(0, False, True, False)”, the code runs indefinitely and produces no results. Is the error caused by the reformulation of my problem or by the code? Looking forward to your reply and help! Many thanks!
All the best,
Yali
Hi Yali,
At first glance, the reformulation looks ok. However, if I understand correctly, the objective function only contains constants and can be computed without knowing the values of the variables that you marked red or green. Hence, you only have a feasibility problem to solve. This indeed can be hard to solve.
I assume by
“When I add model.feasRelaxS(0, False, True, False), the code is always running”
you mean that it is already hard to find a solution when relaxing the variable bounds?
This is understandable since the problem is still to find values for the variables such that the complex equation is satisfied.
Did you consider relaxing the constraints (instead of the variables)?
Hi, Marika,
What I need to explain is that “the objective function only contains constants and can be computed without knowing the values of the variables that you marked red or green” is incorrect. For both the original optimization problem and the reformulated one, the objective function is the sum of the per-combination expression over all K!/(M!(K-M)!) combinations. We have given the expressions for this quantity, including the optimization variables marked red or green, under both problems. The optimization objectives are not constants. So is the problem reformulation still reasonable and correct? Besides, to relax the constraints (instead of the variables), what code do we need to add?
Looking forward to your reply as soon as possible! Many thanks!
All the best,
Yali
Hi Yali,
As I said, the reformulation above, done in order to be able to use Gurobi's function constraints, looks OK.
Please note that the function constraints are handled via a piecewise-linear approximation, which introduces a certain error; see Constraints - Gurobi Optimization.
The function Model.feasRelaxS has four arguments: the third argument indicates whether the variable bounds can be relaxed, while the fourth argument indicates whether the constraints can be relaxed. Hence, model.feasRelaxS(0, False, False, True) relaxes the constraints (and model.feasRelaxS(0, False, True, True) relaxes variable bounds and constraints). There is also the function Model.feasRelax() to specify particular variable bounds or constraints to relax.
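As a quick reference, the four arguments line up as follows (a sketch based on the Model.feasRelaxS documentation; `model` is the model built above):

```python
# feasRelaxS(relaxobjtype, minrelax, vrelax, crelax)
#   relaxobjtype: 0 = minimize sum of violations, 1 = sum of squares,
#                 2 = number of violated bounds/constraints
#   minrelax:     if True, additionally optimize the original objective
#                 among the minimally relaxed solutions
#   vrelax:       allow variable bounds to be violated
#   crelax:       allow constraints to be violated
model.feasRelaxS(0, False, False, True)   # relax only the constraints
```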
Cheers,
Marika
Hi, Marika,
Thanks for your answer! As you can see above, we have reformulated the optimization problem, and then the constraints include function constraints. Gurobi will automatically add a piecewise-linear approximation of each function to the model, so the results will be biased.
However, for the original optimization problem, if we use the first-order Taylor expansion to directly approximate the two log functions in the objective, no function constraints are introduced in the constraints, and the approximation error instead appears in the objective function itself.
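For reference, the first-order Taylor expansion in question, around an operating point \(x_0\) (for the base-10 logarithm used in constraints (f) and (h)), is the standard identity:

```latex
\log_{10}(x) \;\approx\; \log_{10}(x_0) + \frac{x - x_0}{x_0 \ln 10}
```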
I would like to ask which of these two options will have smaller errors? Which one would be better?
Looking forward to your reply as soon as possible! Many thanks!
All the best,
Yali
Hi Yali,
I am sorry, I cannot tell you which approach to prefer.
A general note: Gurobi approximates the function constraints uniformly over the whole domain of the variable. Gurobi may reduce the domains of variables, e.g. by using bound strengthening in presolve. However, if the range is wide, the approximation can be poor. It can be controlled via the attributes FuncPieces, FuncPieceLength, and FuncPieceError. But another approximation that exploits the underlying structure of the problem can be preferable.
You probably need to test both approaches and compare computation time and solution quality (for example using model.printQuality()).
Cheers,
Marika
Hi, Marika,
Thanks for your reply! I would like to ask you two more questions.
First, for the following problem, the constraints (c), (f) and (h) can be approximated using a piecewise-linear approximation. What about the non-convex quadratic constraints (b) and (d)? Will Gurobi still automatically add a piecewise-linear approximation?
Then, to make the optimization results more accurate, what values should we set for the attributes FuncPieces, FuncPieceLength, and FuncPieceError? Is FuncPieces = -1 or -2 better? Is it more useful to set FuncPieceLength (in which case FuncPieces equals 1)? What is a reasonable value for FuncPieceError? Is it necessary and helpful to set FuncPieceRatio = -1? I hope you can suggest preliminary reasonable values for these attributes.
Looking forward to your reply as soon as possible! Many thanks!
All the best,
Yali
Hi, Marika,
In addition to the questions above, I would like to ask why, when I change model.feasRelaxS(0, False, False, True) to model.feasRelaxS(0, False, True, True), my output results differ by an order of magnitude. Also, what do the first and second arguments indicate?
Hi Yali,
Nonconvex quadratic constraints are handled exactly, see the documentation about Quadratic Constraints. If you want to learn more about the algorithms that are used, I recommend having a look at our Webinar on Non-Convex Quadratic Optimization.
Please have a look at the documentation of the function Model.feasRelaxS. It explains the idea of the feature and all input parameters. Let us know if you have questions about this.
It is not easy to give you values on how to set the parameters or attributes for controlling the error for the piecewise linear approximation. It depends on your problem and what error you are willing to accept. I would recommend starting first by trying to tighten the variable bounds as much as possible. If you have a first solution, you could use this solution to tighten the bounds, similar to what is done in the Gurobi example gc_pwl_func.py.
I would then check the solution for violations; e.g., Gurobi prints a warning if there is a constraint violation, similar to:
Warning: max constraint violation (5.0565e-05) exceeds tolerance
Warning: max general constraint violation (5.0565e-05) exceeds tolerance
Then you could, for example, set the parameter FuncPieces to -2 and incrementally reduce FuncPieceError (note the default value and the minimum possible value).
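Such a setting might look like this (a sketch; parameter names as in the Gurobi parameter documentation, and the error value is illustrative — check the docs for the default and minimum):

```python
model.Params.FuncPieces = -2        # choose pieces to meet an error bound
model.Params.FuncPieceError = 1e-4  # tighten incrementally from the default
```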
You can then also experiment with the other parameters to see which combination gives a solution of acceptable quality in an acceptable time.
Cheers,
Marika