I've been trying to formulate a basic portfolio optimization problem in Python, where I want to maximize the expected portfolio return divided by the portfolio variance by varying the weights of the component stocks.
X is the list of weights (decision variables A through E) and V is the covariance matrix:
num = simple sumproduct of X and the expected returns of the corresponding stocks
denom = numpy.matmul(numpy.matmul(X,V),X)
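For reference, here is what that setup computes on plain NumPy arrays (the numbers below are made up for illustration; they are not from my actual model, where X holds solver decision variables rather than floats):

```python
import numpy as np

# Hypothetical data for three assets (illustration only)
mu = np.array([0.08, 0.12, 0.10])           # expected returns
V = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.15, 0.03],
              [0.01, 0.03, 0.12]])           # covariance matrix
X = np.array([0.3, 0.4, 0.3])                # candidate weights

num = np.dot(X, mu)                          # sumproduct: expected portfolio return
denom = np.matmul(np.matmul(X, V), X)        # quadratic form X' V X: portfolio variance
ratio = num / denom                          # the objective I am trying to maximize
```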
When I set the objective to
Obj = num/denom
I get an error "Divisor must be a constant".
I have tried to reformulate this by introducing a dummy variable in a constraint, and by trying different variations of 1/denom to get around the restriction, but nothing seems to work.
Any ideas would be much appreciated.
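In case it helps to show what I'm after: the only way I have gotten this objective to work at all is outside the solver, with a general nonlinear optimizer. Here is a minimal sketch using scipy.optimize.minimize (SLSQP) on the same hypothetical data, with long-only weights summing to 1 — this sidesteps the variable divisor entirely, though it loses the guarantees of the original QP formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data (illustration only)
mu = np.array([0.08, 0.12, 0.10])
V = np.array([[0.10, 0.02, 0.01],
              [0.02, 0.15, 0.03],
              [0.01, 0.03, 0.12]])
n = len(mu)

def neg_ratio(w):
    # Negative of (expected return / variance); minimize() maximizes the ratio
    return -(w @ mu) / (w @ V @ w)

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]  # fully invested
bounds = [(0.0, 1.0)] * n                                       # long-only

res = minimize(neg_ratio, np.full(n, 1.0 / n), method="SLSQP",
               bounds=bounds, constraints=constraints)
w_opt = res.x
```

The equal-weight starting point keeps the denominator strictly positive, so the ratio is well defined throughout. I'd still prefer a proper reformulation for the solver if one exists.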