• Gurobi Staff

Hi Marcell,

We have an optimod on this:

Cheers,
David

Hi David,

The problem is that I have 482 assets and these constraints:
how many assets should be in the portfolio,
the minimum and maximum weight that can be assigned to one asset,
the weights should add up to one,
the returns * weights should be greater than or equal to the target return.

The problem is that the OptiMod just does this on the whole dataset and has no cardinality constraint. In my case, I need an optimization that chooses the assets that maximize the Sharpe ratio while not breaking these constraints.

Cheers,

Marcell

• Gurobi Staff

Constraints in your original model (expressed in terms of variables $$x$$ representing portfolio weight) can be translated to the OptiMod model's variable space (reformulated to use variables $$y$$) using the relationship $$x_i = y_i / \sum_{j=1}^n y_j$$ for all assets $$i$$. For example, building off of the Sharpe ratio OptiMod model code:

z = model.addMVar(mu.size, vtype=GRB.BINARY, name="z")

# Variable ysum equal to sum of y variables
ysum = model.addMVar(shape=(1,), name="ysum")
model.addConstr(ysum == y.sum(), name="set_ysum")

# Minimum/maximum holdings
# lb * z <= x <= ub * z
# lb * z <= y / ysum <= ub * z
# lb * z * ysum <= y <= ub * z * ysum
model.addConstr(0.03 * z * ysum <= y, name="minimum_holding")
model.addConstr(y <= 0.43 * z * ysum, name="maximum_holding")

# Cardinality constraint
model.addConstr(z.sum() == 3, name="cardinality_constraint")

# Target return
# mu @ x >= r
# mu @ (y / ysum) >= r
# mu @ y >= r * ysum
r = np.mean(mu)
model.addConstr(mu @ y >= r * ysum, name="target_return_rate")
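As a quick sanity check of the translation (not part of the OptiMod; the y values below are made up), a solution in the transformed space maps back to weights that satisfy the original x-space constraints:

```python
import numpy as np

# Hypothetical y solution for four assets (made-up numbers, only to
# illustrate the x = y / sum(y) translation and the bound equivalence).
y = np.array([0.0, 1.2, 1.3, 1.5])
ysum = y.sum()
x = y / ysum                      # translate back to portfolio weights

assert np.isclose(x.sum(), 1.0)   # weights add up to one

# lb * z <= x <= ub * z in x space is equivalent to
# lb * z * ysum <= y <= ub * z * ysum in y space
z = (y > 0).astype(float)         # which assets are selected
lb, ub = 0.03, 0.43
assert np.all(lb * z * ysum <= y) and np.all(y <= ub * z * ysum)
assert z.sum() == 3               # cardinality constraint holds
```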
• Gurobi Staff

Thanks, I see the difference now.
Can you not use the same reformulation trick (as in the drop-down "Mathematical Model")
and rewrite the constraints to use $$y$$ variables instead?

Cheers,
David

Dear David,

So if I understand correctly, I should just put this part:

z = model.addMVar(mu.size, vtype=GRB.BINARY, name="z")

# Variable ysum equal to sum of y variables
ysum = model.addMVar(shape=(1,), name="ysum")
model.addConstr(ysum == y.sum(), name="set_ysum")

# Minimum/maximum holdings
# lb * z <= x <= ub * z
# lb * z <= y / ysum <= ub * z
# lb * z * ysum <= y <= ub * z * ysum
model.addConstr(0.03 * z * ysum <= y, name="minimum_holding")
model.addConstr(y <= 0.43 * z * ysum, name="maximum_holding")

# Cardinality constraint
model.addConstr(z.sum() == 3, name="cardinality_constraint")

# Target return
# mu @ x >= r
# mu @ (y / ysum) >= r
# mu @ y >= r * ysum
r = np.mean(mu)
model.addConstr(mu @ y >= r * ysum, name="target_return_rate")

into the function like this, and it should work, right?

import math
from dataclasses import dataclass
from typing import Union

import gurobipy as gp
import numpy as np
import pandas as pd
from gurobipy import GRB

from gurobi_optimods.utils import optimod


@optimod()
def max_sharpe_ratio(cov_matrix, mu, rf_rate=0, *, create_env):
    """
    Solve the problem of finding a portfolio that maximizes the
    Sharpe ratio.

    Parameters
    ----------
    cov_matrix : ndarray or DataFrame
        2D positive-semidefinite variance-covariance matrix :math:`\Sigma`
    mu : ndarray or Series
        Expected return rates :math:`\mu`
    rf_rate : float >= 0, optional
        Non-negative risk-free rate of return (defaults to 0)

    Returns
    -------
    result : SharpeRatioResult
        A data class representing the portfolio that maximizes the Sharpe
        ratio:

        * result.x: The relative portfolio allocations :math:`x`
        * result.sharpe_ratio: The Sharpe ratio of the portfolio
        * result.ret: The estimated return :math:`\mu^T x` of the portfolio
        * result.risk: The estimated risk :math:`x^T \Sigma x` of the
          portfolio
    """
    indices = None

    if isinstance(cov_matrix, pd.DataFrame):
        indices = cov_matrix.index
        cov_matrix = cov_matrix.to_numpy()
    elif not isinstance(cov_matrix, np.ndarray):
        raise TypeError(f"Unknown covariance matrix type: {type(cov_matrix)}")

    if cov_matrix.ndim != 2:
        raise ValueError(
            f"Covariance matrix should be in 2 dimensions, not {cov_matrix.ndim}"
        )

    if isinstance(mu, pd.Series):
        if indices is None:
            indices = mu.index
        elif not mu.index.equals(indices):
            raise ValueError("Indices of cov_matrix and mu are misaligned")
        mu = mu.to_numpy()
    elif not isinstance(mu, np.ndarray):
        raise TypeError(f"Unknown return rates type: {type(mu)}")

    if mu.ndim != 1:
        raise ValueError(f"Return rates should be in 1 dimension, not {mu.ndim}")

    if not isinstance(rf_rate, float) and not isinstance(rf_rate, int):
        raise TypeError(f"Unknown risk-free return rate type: {type(rf_rate)}")
    elif rf_rate < 0:
        raise ValueError("Risk-free return rate should be non-negative")
    elif (mu < rf_rate).all():
        raise ValueError(
            f"No expected returns are greater than risk-free return rate of {rf_rate}"
        )

    result = _max_sharpe_ratio_numpy(cov_matrix, mu, rf_rate, create_env)
    if indices is not None:
        result.x = pd.Series(data=result.x, index=indices)
    return result


def _max_sharpe_ratio_numpy(cov_matrix, mu, rf_rate, create_env):
    with create_env() as env, gp.Model("sharpe_ratio", env=env) as model:
        y = model.addMVar(mu.size, name="y")
        model.addConstr((mu - rf_rate) @ y == 1)
        model.setObjective(y @ cov_matrix @ y, sense=GRB.MINIMIZE)

        z = model.addMVar(mu.size, vtype=GRB.BINARY, name="z")

        # Variable ysum equal to sum of y variables
        ysum = model.addMVar(shape=(1,), name="ysum")
        model.addConstr(ysum == y.sum(), name="set_ysum")

        # Minimum/maximum holdings
        # lb * z <= x <= ub * z
        # lb * z <= y / ysum <= ub * z
        # lb * z * ysum <= y <= ub * z * ysum
        model.addConstr(0.03 * z * ysum <= y, name="minimum_holding")
        model.addConstr(y <= 0.13 * z * ysum, name="maximum_holding")

        # Cardinality constraint
        model.addConstr(z.sum() == 15, name="cardinality_constraint")

        # Target return
        # mu @ x >= r
        # mu @ (y / ysum) >= r
        # mu @ y >= r * ysum
        r = np.mean(mu)
        model.addConstr(mu @ y >= r * ysum, name="target_return_rate")

        model.optimize()

        # Translate solution to original variable space
        x = y.X / y.X.sum()
        ret = mu @ x
        risk = x @ cov_matrix @ x
        sharpe_ratio = (ret - rf_rate) / math.sqrt(risk)

        return SharpeRatioResult(x, sharpe_ratio, ret, risk)


@dataclass
class SharpeRatioResult:
    """
    Data class representing the portfolio that maximizes the Sharpe ratio.

    Attributes
    ----------
    x : ndarray or Series
        The relative portfolio allocations :math:`x`
    sharpe_ratio : float
        The Sharpe ratio of the portfolio
    ret : float
        The estimated return :math:`\mu^T x` of the portfolio
    risk : float
        The estimated risk :math:`x^T \Sigma x` of the portfolio
    """

    x: Union[np.ndarray, pd.Series]
    sharpe_ratio: float
    ret: float
    risk: float

And the efficient frontier can be built by varying the target return right?

Cheers,

Marcell

• Gurobi Staff

Yes, that should work. Varying the target return should produce the efficient frontier.

Hi!

I have a similar problem and I am making use of the transformed representation. All assets have a minimum and maximum holding. In total I have three assets, say A1, A2 and A3. Now this is different in my case:

* The asset A1 is special and can only be taken into the portfolio when A2 has passed a certain threshold. That is easy to implement with the integers z and ysum (as defined above).

* A1 should not count toward the weights constraint (the $$x$$ in the original OptiMods problem), while its mean and covariance should be considered as usual. So the sum of the weights of A2 and A3 needs to be one, while the sum of A1, A2 and A3 can be larger (up to 1.15, since the maximum holding of A1 is 0.15). I am struggling to bring this constraint into the transformed formulation. Could you help?

• Gurobi Staff

My initial thought is that you don't need a special constraint. In the original problem, $$x$$ represents what proportion of your budget is allocated to each asset. By definition, these numbers should sum to 1. Maybe another way of looking at it: if you invested \$0.15 in A1, \$0.40 in A2, and \$0.60 in A3, this would be represented in the original model as $$x_1=0.13043$$, $$x_2=0.34783$$, and $$x_3=0.52174$$.
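The arithmetic in that example is just a normalization of the dollar amounts; a small check (the amounts are taken from the example above):

```python
import numpy as np

# Dollar amounts invested in A1, A2, A3 from the example above
amounts = np.array([0.15, 0.40, 0.60])
x = amounts / amounts.sum()   # relative allocations always sum to 1
print(np.round(x, 5))         # [0.13043 0.34783 0.52174]
```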

Do the results not make sense without a special constraint? It would be helpful to see a small example showing that.

Dear Eli Towle,

The code works perfectly for my basic cases but when I try to run it for 843 assets it runs for a very long time and I get this error message:

Explored 5308982 nodes (64204304 simplex iterations) in 36602.53 seconds (7673.94 work units)
Thread count was 8 (of 8 available processors)

Solution count 1: 19.9949

Solve interrupted (error code 10001)
Best objective 1.999494801962e+01, best bound 1.935112130712e+01, gap 3.2199%

Traceback (most recent call last):
  Cell In[8], line 1
    result_US_india_full=max_sharpe_ratio(sigma, mu)
  File ~\anaconda3\Lib\site-packages\gurobi_optimods\utils.py:137 in optimod_decorated
    return func(*args, create_env=create_env, **kwargs)
  Cell In[1], line 74 in max_sharpe_ratio
    result = _max_sharpe_ratio_numpy(cov_matrix, mu, rf_rate, create_env)
  Cell In[1], line 110 in _max_sharpe_ratio_numpy
    model.optimize()
  File src\gurobipy\model.pxi:893 in gurobipy.Model.optimize
GurobiError: Out of memory

How can it be solved for this problem size? I saw that parallelization could help, but for me it takes even more time in that case. Is it possible to just set the gap higher, or how else can I solve this problem?
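For reference, a relaxed gap or a time limit can be set through Gurobi parameters on the model before calling model.optimize(). This is only a sketch of where such settings would go, and the values are illustrative, not recommendations; NodefileStart targets the out-of-memory symptom by letting branch-and-bound nodes spill to disk:

```python
# Fragment: assumes a gurobipy model object as built above.
model.Params.MIPGap = 0.05         # stop once the optimality gap is below 5%
model.Params.TimeLimit = 3600      # stop after one hour, keep best solution found
model.Params.NodefileStart = 0.5   # write B&B nodes to disk beyond 0.5 GB of memory
```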