
How to assign a new variable equal to min of two other variable (Gurobi in Python)


20 comments

  • Jonasz Staszek
    Community Moderator

    Hi Ye,

    1. MIN, MAX Constraints:

    You are looking for a min_ constraint, as described here.

    There are also max_ constraints, described here, which work similarly.

    Assuming that you have defined your variables somewhere earlier in your code, the relevant part could look as follows:

    if P[2,i] > (1-alpha)*P[0,i] + P[4,i]:
        y1_1[0,i] = 0
        m.addConstr(y2_1[0,i] == min_(s2_1[0,i], q))
        m.addConstr(x1_1[0,i] == max_(0, s1_1[0,i]-z_1_x1[0,i]))
        m.addConstr(x2_1[0,i] == max_(0, s2_1[0,i]-q-z_2_x1[0,i]))

    On a side note:

    y1_1[0,i] = 0

    will put the integer 0 into your dictionary y1_1, which otherwise stores variables. Is that the expected behavior?
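
    For reference, here is a minimal, self-contained sketch of how such a min_ constraint is typically set up (the model and variable names are made up for illustration; max_ works the same way):

    import gurobipy as gp
    from gurobipy import GRB, min_, max_

    m = gp.Model("minmax_sketch")
    a = m.addVar(lb=-GRB.INFINITY, name="a")
    b = m.addVar(lb=-GRB.INFINITY, name="b")
    z = m.addVar(lb=-GRB.INFINITY, name="z")

    m.addConstr(a == 3)
    m.addConstr(b == 7)
    m.addConstr(z == min_(a, b, 5))  # z = min(a, b, 5); use max_ analogously

    m.setObjective(z, GRB.MAXIMIZE)
    m.optimize()
    print(z.X)  # expected: 3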

    2. Sum in objective function:

    If your R1 is a Gurobi tupledict, you can generate the sum of all variables contained in it via:

    R1.sum()
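
    For example, a minimal sketch (assuming R1 was created with m.addVars and you are maximizing):

    m.setObjective(R1.sum(), GRB.MAXIMIZE)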

    Hope this helps.

    Best regards
    Jonasz

     

  • Jaromił Najman
    Gurobi Staff

    Hi Ye,

    In addition to what Jonasz already stated, the general constraint methods accept Var objects only. This means that you cannot directly provide a sum or difference of two variables to the min_ / max_ function; you have to introduce additional auxiliary variables.

    [...]
    auxVar = m.addVar(lb=-GRB.INFINITY, name="auxVar_%d" % (i))
    m.addConstr(auxVar == s1_1[0,i] - z_1_x1[0,i], name="auxConstr_%d" % (i))
    m.addConstr(x1_1[0,i] == max_(0, auxVar))

    Moreover, the \(\texttt{if}\)-clauses will work only if your \(P\) object holds pre-defined parameter values. They will not work if \(P\) contains optimization variables. If \(P\) are indeed optimization variables, you have to introduce additional binary variables and formulate your model with the help of conditional statements. Please have a look at How do I model conditional statements in Gurobi? for more details.
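
    For illustration, here is a minimal sketch of the big-M pattern described in that article (b, x, y, eps, and M are placeholder names; M must be a valid bound on |x - y| for your data):

    import gurobipy as gp
    from gurobipy import GRB

    m = gp.Model("bigM_sketch")
    x = m.addVar(lb=-GRB.INFINITY, name="x")
    y = m.addVar(lb=-GRB.INFINITY, name="y")
    b = m.addVar(vtype=GRB.BINARY, name="b")  # b = 1 is meant to indicate x > y

    eps = 1e-4  # small tolerance to model the strict inequality
    M = 1e6     # big-M constant; must bound |x - y| for the formulation to be valid

    m.addConstr(x >= y + eps - M*(1 - b))  # b = 1  =>  x >= y + eps
    m.addConstr(x <= y + M*b)              # b = 0  =>  x <= y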

    Best regards, 
    Jaromił

  • Ye Liu

    Thank you so much, Jonasz and Jaromił! I really appreciated it! 

    - Jonasz, yes, I tried writing it into constraints before, but I didn't know the trick of writing the objective function as R1.sum(). Also, thanks for pointing out y1_1[0,i] = 0; I rewrote it as a constraint to avoid further issues.

    - Jaromił, thanks so much for pointing this out. I was struggling with this as well! Thanks for the suggestion!

    Best,

    Ye

  • Ye Liu

    I do have another question regarding writing the constraint:

    y1_1[i] = min( s1_1[i], max(q-s2_1[i], 0) )
     
    I split it into constraints by adding two additional variables, y1_10 and y1_11, but when I added these extra constraints, the model became unsolvable. I wrote them as follows:
            m_ML2.addConstr(y1_10 == q - s2_1[i])
            m_ML2.addConstr(y1_11 == gp.max_(y1_10,0))
            m_ML2.addConstr(y1_1[i] == gp.min_(s1_1[i], y1_11) )
    The error is: Model is infeasible or unbounded. When I used only the single constraint below, it worked, but it is not the constraint I wanted.
            m_ML2.addConstr(y1_1[i] == q - s2_1[i])   #only write this one works
     
    I would really appreciate it if someone could answer this question. It is confusing to me why I cannot specify several constraints involving one variable.
     
    Thanks,
    Ye
  • Jaromił Najman
    Gurobi Staff

    Did you explicitly set the auxiliary variables to be free by setting their lower bound to \(-\infty\)?

    auxVar = m.addVar(lb=-GRB.INFINITY,name="auxVar")

    You can check the Knowledge Base article How do I resolve the error "model is infeasible or unbounded"?

    If you did this already, could you please provide a minimal reproducible example showing the issue?

    Best regards, 
    Jaromił

     

  • Ye Liu

    Thanks, Jaromił, for answering this question! I tried what you suggested, but the model is still not solvable when T=3 (it works when T=2). Here is a minimal reproducible example showing the issue:

    import numpy as np
    import gurobipy as gp
    from gurobipy import GRB, abs_
    T = 3
    q = 98107
    alpha = 0.154
    beta = 0.02
    P0 = np.random.normal(30, 4, T)
    P = np.array((P0, P0-10, P0, P0-20, P0-25))
    w1 = np.random.normal(28937.89109,7875.353526,T+1)#from data, mean, sd, size
    w2 = np.random.normal(62906.31683,9551.438123,T+1) #from data
     
    try:
        m_ML2 = gp.Model("Model")
        
        # define decision variables
        y1_10 = m_ML2.addVar(vtype = GRB.CONTINUOUS, name = 'y10', lb=-GRB.INFINITY)
       
        x1_1 = m_ML2.addVars(T, name = 'x1_1', lb = 0)
        x2_1 = m_ML2.addVars(T, name = 'x2_1', lb = 0)
        y1_1 = m_ML2.addVars(T, name = 'y1_1', lb = 0)
        y2_1 = m_ML2.addVars(T, name = 'y2_1', lb = 0)
        
        z1_1 = m_ML2.addVars(T, name = 'z1_1', lb = 0)
        z2_1 = m_ML2.addVars(T, name = 'z2_1', lb = 0)
        
        s1_1 = m_ML2.addVars(T, name = 's1_1', lb = 0)
        s2_1 = m_ML2.addVars(T, name = 's2_1', lb = 0)
              
        R1 = m_ML2.addVars(T, name = 'R1')
        #period 0
        m_ML2.addConstr(z1_1[0] == 5000)
        m_ML2.addConstr(z2_1[0] == 10000)
        m_ML2.addConstr(y2_1[0] == 0)
        m_ML2.addConstr(x2_1[0] == w2[0])
        
        m_ML2.addConstr(y1_1[0] == 0 )
        m_ML2.addConstr(x1_1[0] == w1[0])
        #reward
        m_ML2.addConstr(R1[0] == P[0,0]*(y2_1[0]+(1-alpha)*y1_1[0])+P[1,0]*x2_1[0]+P[2,0]*x1_1[0]-P[4,0]*(q-y1_1[0]-y2_1[0])-P[3,0]*(w1[0]+w2[0]))
        
        for i in range(1,T):    
            m_ML2.addConstr(z1_1[i] == 5000)
            m_ML2.addConstr(z2_1[i] == 10000)            
            m_ML2.addConstr(s1_1[i] == w1[i])
            m_ML2.addConstr(s2_1[i] == w2[i]+z1_1[i-1]+z2_1[i-1])
            m_ML2.addConstr(y2_1[i] == gp.min_(q,s2_1[i]) )
            m_ML2.addConstr(x2_1[i] == 0)
            
    #        m_ML2.addConstr(y1_1[i] == q - y2_1[i]) #only using this one works instead of the following two constrs

            m_ML2.addConstr(y1_10 == q - y2_1[i])
            m_ML2.addConstr(y1_1[i] == gp.min_(s1_1[i], y1_10) )


            m_ML2.addConstr(x1_1[i] == 0 )
            #reward
            m_ML2.addConstr(R1[i] == P[0,i]*(y2_1[i]+(1-alpha)*y1_1[i])+P[1,i]*x2_1[i]+P[2,i]*x1_1[i]-P[3,i]*(z1_1[i]+z2_1[i])-P[4,i]*(q-y1_1[i]-y2_1[i])-P[3,i]*(w1[i]+w2[i]))

           
        m_ML2.setObjective(R1.sum(), GRB.MAXIMIZE)         
        # Optimize model
        m_ML2.optimize()
            
        for v in m_ML2.getVars():
            print('%s %g' % (v.varName, v.x))

        print('Obj: %g' % m_ML2.objVal)

    except gp.GurobiError as e:
        print('Error code ' + str(e.errno) + ': ' + str(e))

    except AttributeError:
        print('Encountered an attribute error')

  • Jaromił Najman
    Gurobi Staff

    If you disable DualReductions and then compute an IIS

    m_ML2.setParam("DualReductions",0)

    [...]
    m_ML2.optimize()
    m_ML2.computeIIS()
    m_ML2.write("iis.ilp")

    and have a look at the IIS file \(\texttt{iis.ilp}\), you will see that

     R0: z1_1[0] = 5000
     R1: z2_1[0] = 10000
     R10: - z1_1[0] - z2_1[0] + s2_1[1] = 59970.24755004315
     R12: y10 + y2_1[1] = 98107
     R18: - z1_1[1] - z2_1[1] + s2_1[2] = 78968.96449813887
     R20: y10 + y2_1[2] = 98107
    Bounds
     y10 free
     y2_1[1] free
     y2_1[2] free
     z1_1[0] free
     z2_1[0] free
     s2_1[1] free
     s2_1[2] free
    General Constraints
     GC0: y2_1[1] = MIN ( s2_1[1] , 98107 )
     GC2: y2_1[2] = MIN ( s2_1[2] , 98107 )

    this set of constraints and variables cannot be fulfilled. This is because

    \[\begin{align}
    s_{2,1,1} &= 59970.24755004315 + 10000 + 5000 = 74970.24755004315\\
    s_{2,1,2} &= 78968.96449813887 + 10000 + 5000 = 93968.96449813887\\
    y_{2,1,1} &= \min(74970.24755004315, 98107) = 74970.24755004315\\
    y_{2,1,2} &= \min(93968.96449813887, 98107) = 93968.96449813887\\
    \end{align}\]
    Then, from constraints R12 and R20, you also have

    \[\begin{align}
    y_{10} &= 98107 - 74970.24755004315\\
    y_{10} &= 98107 - 93968.96449813887
    \end{align}\]
    which obviously cannot hold.

    Now it is up to you to double-check your model and understand why this happens. My guess is that you need more than one \(y_{10}\): you probably need one \(y_{10,i}\) for every \(\min\) term, i.e., one per period \(i\).
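
    A minimal sketch of that idea, reusing the names from your example (m_ML2, T, q, y2_1, s1_1 and y1_1 as defined there):

    # one auxiliary variable per period instead of a single shared y1_10
    y1_10 = m_ML2.addVars(T, lb=-GRB.INFINITY, name="y1_10")

    for i in range(1, T):
        m_ML2.addConstr(y1_10[i] == q - y2_1[i])
        m_ML2.addConstr(y1_1[i] == gp.min_(s1_1[i], y1_10[i]))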

    Best regards, 
    Jaromił

  • Ye Liu

    Thank you so much, Jaromił!!! I redefined y_10 as a vector, and now it works! I thought the new value of y_10 would overwrite the previous one, which is why I only used a single variable y_10 before.

    The method you used to find out what went wrong is very helpful and will help me debug in the future! I wonder how you found the iis.ilp file. I am using Gurobi in Spyder (via Anaconda), and when I wrote the following,

    m_ML2.setParam("DualReductions",0)
    [...]
    m_ML2.optimize()
    m_ML2.computeIIS()
    m_ML2.write("iis.ilp")

    I only got this in the output console, instead of what you got (the specific constraints and numerical values):

    Computing Irreducible Inconsistent Subsystem (IIS)...

               Constraints          |            Bounds           |  Runtime
          Min       Max     Guess   |   Min       Max     Guess   |
    --------------------------------------------------------------------------
            0        27         -         0        27         -           0s
            9         9         -         1         1         -           0s

    IIS computed: 9 constraints, 1 bounds
    IIS runtime: 0.07 seconds
    Encountered an attribute error

    Thanks a lot!

    Best,

    Ye

  • Jaromił Najman
    Gurobi Staff

    Hi Ye,

    m_ML2.write("iis.ilp")

    will generate the \(\texttt{iis.ilp}\) file in the working directory of your Spyder/Anaconda session. It is easiest to just search your machine for it. It is also possible that you are missing write permissions, in which case your Spyder/Anaconda environment will not generate any files; it is then best to run your IDE as admin.
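
    A quick way to check where the file ends up, or to write it to an explicit location instead (a small sketch using the Python standard library; the target path is only an example):

    import os

    print(os.getcwd())  # this is where m_ML2.write("iis.ilp") places the file

    # alternatively, write to an explicit absolute path
    m_ML2.write(os.path.join(os.path.expanduser("~"), "iis.ilp"))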

    Please note that the provided code only works when your model is guaranteed to be infeasible, because an IIS can only be computed for infeasible models. A correct way of implementing an IIS computation would be

    m_ML2.optimize()
    if m_ML2.Status == GRB.INFEASIBLE:
        m_ML2.computeIIS()
        m_ML2.write("iis.ilp")

    Best regards, 
    Jaromił

  • Ye Liu

    Jaromił, thank you so much for your answers, it worked! You really helped me move forward with my research project!!!

    One more question regarding your previous response on How do I model conditional statements in Gurobi? I encountered a very similar situation.

    The problem looks like this:

    m.addConstr(x == c1*x1+c2*x2+c3*x3)

    m.addConstr( x>=y+eps-M*(1-b) )

    m.addConstr( x<=y+ M*b )

    The only difference is that x is a linear expression of several optimization variables: x1, x2, x3. I get the error: Unsupported type (<class 'numpy.ndarray'>) for LinExpr addition argument.

    I would very much appreciate your help! Thank you in advance!

    Best,

    Ye

     

  • Jaromił Najman
    Gurobi Staff

    Hi Ye,

    Could you share how you define \(x\)? The error indicates that a numpy array of parameters is used instead of a variable tupledict. Could you please try to construct a minimal working example reproducing the issue?

    Best regards, 
    Jaromił

  • Ye Liu

    Thank you so much, Jaromił! I really appreciate your response! When I was trying to construct a minimal working example, I found out that I had used a vector instead of a scalar for the coefficient of my x. After fixing that, the big-M approach you introduced in the post (How do I model conditional statements in Gurobi?) worked!
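
    For completeness, a minimal sketch of the pattern that ended up working (c1, c2, c3 are placeholder scalar coefficients; eps and M as in the earlier snippet):

    import gurobipy as gp
    from gurobipy import GRB

    m = gp.Model("linexpr_bigM_sketch")
    x1 = m.addVar(name="x1")
    x2 = m.addVar(name="x2")
    x3 = m.addVar(name="x3")
    x = m.addVar(lb=-GRB.INFINITY, name="x")
    y = m.addVar(lb=-GRB.INFINITY, name="y")
    b = m.addVar(vtype=GRB.BINARY, name="b")

    c1, c2, c3 = 2.0, 3.0, 4.0  # plain Python scalars, not numpy arrays
    eps, M = 1e-4, 1e6

    m.addConstr(x == c1*x1 + c2*x2 + c3*x3)
    m.addConstr(x >= y + eps - M*(1 - b))
    m.addConstr(x <= y + M*b)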

    One more question about resetting the model. I need to run this model more than once. Since each period has different prices (and other exogenous parameters), the model has different parameters in each run. When I run the model many times, it becomes very slow; one full run took me multiple days. I wonder if this is something I can improve upon.

    I used resetParams() to reset parameters for each run: 

                m_ML2 = gp.Model()
                m_ML2.reset(0) # since I am running model multiple times via looping, I reset the model for each run
                m_ML2.resetParams()

    The issue looks like this (time here in the last column can take up to thousands of seconds):

    Attached is the minimal working example. When Tmax is a small number (below about 30), it works. But if Tmax = 101 and n > 2, then this situation occurs.

    import random
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm
    import gurobipy as gp
    from gurobipy import GRB, abs_
    #set the number of run times here
    Tmax = 100  
    nmax = 1

    rng = np.random.default_rng(123)
    q = 98107
    alpha = 0.154
    beta = 0.02
    Twarmup = 101
    eps = 0.00002
    M = 1000000
    W0_factor = np.array([[-0.050083, -0.13667, 0.0020002], #c
                         [0.62053, 0.36688, 0.8887], # ar1
                         [0.52539, 0.39767, -0.70058], # ma1
                         [4.6059, 3.7998, 0.05894]]) # var
                            #Pi_O  Pi_M    Pi_F
                            
    W0_OTC = np.array([[14.617, -34.846, -8.940, 1.037, -0.089], #c
                      [0.165, -0.274, -0.183, 0.806, -0.183], # ar1
                      [0.358, 0.342, 0.454, -0.253, 0.454], # ma1
                      [0.11, 0.006, -0.027, 0.026, -0.027], # sma
                      [0.876, 2.207, 2.284, 0.005, 0.023], #Pi_O
                      [0.673, 0.471, 0.328, -0.007, 0.003], #Pi_M
                      [0.237, 1.101, 0.191, 0.001, 0.002], #Pi_F
                      [4.839, 47.704, 16.58, 0.020227, 0.0016579]]) #var
                       #p_c   p_1     p_2     c_h        c_p 
    # Cholesky decomposition matrix
    cov = [[5,   0.5, 0.4, 0.3, 0.2], 
           [0.5, 4,   0.4, 0.2, 0.1], 
           [0.4, 0.5, 3,   0.3, 0.2], 
           [0.3, 0.2, 0.15, 2,  0.1],
           [0.3, 0.2, 0.15, 0.1, 1]]
    L = np.linalg.cholesky(cov)       

    V_ML = np.zeros((Tmax-1))    
    for T in range(1, Tmax):
        print(T)
        V2 = np.zeros((nmax))
        
        for n in range(nmax):
        
            Pi = np.zeros((3, Tmax))
            P = np.zeros((5, Tmax))
            pF = np.zeros((1, Tmax))
            tempPi = np.zeros((3, Tmax))
            temp = np.zeros((5,Tmax+1))
            w1 = np.random.normal(28937.89109,7875.353526,T+1)#from data, mean, sd, size
            w2 = np.random.normal(62906.31683,9551.438123,T+1) #from data

            # Warm up factor price
            u_O = np.random.normal(0,np.sqrt(W0_factor[3,0]),Twarmup+53)
            u_M = np.random.normal(0,np.sqrt(W0_factor[3,1]),Twarmup+53)
            u_F = np.random.normal(0,np.sqrt(W0_factor[3,2]),Twarmup+53)
            Pi_O = np.zeros(Twarmup+53)
            Pi_M = np.zeros(Twarmup+53)
            Pi_F = np.zeros(Twarmup+53)
            
            for t in range(2,len(u_O)):
                Pi_O[t] = W0_factor[0,0] + (1 + W0_factor[1,0])*Pi_O[t-1] - W0_factor[1,0]*Pi_O[t-2] + u_O[t] + W0_factor[2,0]*u_O[t-1]
            
            for t in range(2,len(u_M)):
                Pi_M[t] = W0_factor[0,1] + (1 + W0_factor[1,1])*Pi_M[t-1] - W0_factor[1,1]*Pi_M[t-2] + u_M[t] + W0_factor[2,1]*u_M[t-1]
            
            for t in range(2,len(u_F)):
                Pi_F[t] = W0_factor[0,2] + (1 + W0_factor[1,2])*Pi_F[t-1] - W0_factor[1,2]*Pi_F[t-2] + u_F[t] + W0_factor[2,2]*u_F[t-1]
            
            Pi_O = Pi_O + 25  
            Pi_M = Pi_M + 55
            Pi_F = Pi_F + 2.5
            Pi_O[Pi_O<0]=0
            Pi_M[Pi_M<0]=0
            Pi_F[Pi_F<0]=0
            
            #warm up OTC prices
            w_C = np.random.normal(0,np.sqrt(W0_OTC[7,0]),Twarmup+53)
            w_1 = np.random.normal(0,np.sqrt(W0_OTC[7,1]),Twarmup+53)
            w_2 = np.random.normal(0,np.sqrt(W0_OTC[7,2]),Twarmup+53)
            w_H = np.random.normal(0,np.sqrt(W0_OTC[7,3]),Twarmup+53)
            w_P = np.random.normal(0,np.sqrt(W0_OTC[7,4]),Twarmup+53)
            P_C = np.zeros(Twarmup+53)  
            P_1 = np.zeros(Twarmup+53)  
            P_2 = np.zeros(Twarmup+53)  
            C_H = np.zeros(Twarmup+53)  
            C_P = np.zeros(Twarmup+53) 
            
            for t in range(53,len(w_C)): ## may be rewritten as function later        
                P_C[t] = W0_OTC[0,0] + P_C[t-1]*W0_OTC[1,0] + Pi_O[t]*W0_OTC[4,0] + Pi_M[t]*W0_OTC[5,0] + Pi_F[t]*W0_OTC[6,0] + w_C[t] + W0_OTC[3,0]*w_C[t-52] + W0_OTC[2,0]*w_C[t-1] + W0_OTC[2,0]*W0_OTC[3,0]*w_C[t-53]
            
            for t in range(53,len(w_1)):
                P_1[t] = W0_OTC[0,1] + P_1[t-1]*W0_OTC[1,1] + Pi_O[t]*W0_OTC[4,1] + Pi_M[t]*W0_OTC[5,1] + Pi_F[t]*W0_OTC[6,1] + w_1[t] + W0_OTC[3,1]*w_1[t-52] + W0_OTC[2,1]*w_1[t-1] + W0_OTC[2,1]*W0_OTC[3,1]*w_1[t-53]
            
            for t in range(53,len(w_2)):
                P_2[t] = W0_OTC[0,2] + P_2[t-1]*W0_OTC[1,2] + Pi_O[t]*W0_OTC[4,2] + Pi_M[t]*W0_OTC[5,2] + Pi_F[t]*W0_OTC[6,2] + w_2[t] + W0_OTC[3,2]*w_2[t-52] + W0_OTC[2,2]*w_2[t-1] + W0_OTC[2,2]*W0_OTC[3,2]*w_2[t-53]
            
            for t in range(53,len(w_H)):
                C_H[t] = W0_OTC[0,3] + C_H[t-1]*W0_OTC[1,3] + Pi_O[t]*W0_OTC[4,3] + Pi_M[t]*W0_OTC[5,3] + Pi_F[t]*W0_OTC[6,3] + w_H[t] + W0_OTC[3,3]*w_H[t-52] + W0_OTC[2,3]*w_H[t-1] + W0_OTC[2,3]*W0_OTC[3,3]*w_H[t-53]
                
            for t in range(53,len(w_P)):
                C_P[t] = W0_OTC[0,4] + C_P[t-1]*W0_OTC[1,4] + Pi_O[t]*W0_OTC[4,4] + Pi_M[t]*W0_OTC[5,4] + Pi_F[t]*W0_OTC[6,4] + w_P[t] + W0_OTC[3,4]*w_P[t-52] + W0_OTC[2,4]*w_P[t-1] + W0_OTC[2,4]*W0_OTC[3,4]*w_P[t-53]
            
            
            P_C[P_C<0]=0
            P_1[P_1<0]=0
            P_2[P_2<0]=0
            C_H[C_H<0]=0
            C_P[C_P<0]=0

            Pi[:,0] = np.stack((Pi_O[-1], Pi_M[-1], Pi_F[-1]))
            #OTC prices first period             
            P[:,0] = np.stack((P_C[-1], P_2[-1], P_1[-1], C_H[-1], C_P[-1]))
            pF[0,0] = C_H[-1]+rng.normal(0,C_H[-1]/100) #mean and standard deviation, forward market price for the first period of the spot market price
            # generate 2nd to 100 period prices
            W_factor = np.array([[-0.050083, 1+0.62053, -0.62053],
                        [-0.13667, 1+0.36688, -0.36688],
                        [0.0020002, 1+0.8887, -0.8887]])
            tempPi[0,0] = W_factor[0,:]@np.array([1,Pi_O[-1],0]) #inner product
            tempPi[1,0] = W_factor[1,:]@np.array([1, Pi_M[-1], 0])
            tempPi[2,0] = W_factor[2,:]@np.array([1, Pi_F[-1], 0])
            tempPi[tempPi<0]=0
                
            #generate forecast for the OTC 
            W_OTC = np.array([[14.617, 0.16547, 0.87606, 0.67273, 0.23732], #P_C
                      [-8.9395, -0.18333, 2.2838, 0.32753, 0.19068], #P_2
                      [-34.846, -0.27437, 2.2071, 0.47132, 1.1006], #P_1
                      [1.0374, 0.80573, 0.004734, -0.0065202, 0.0012593], #C_H
                      [-0.089304, -0.18324, 0.022836, 0.0032739, 0.0019056]]) #C_P
            temp[0,0] = W_OTC[0,:]@np.hstack((1, P_C[-1], tempPi[:,0])) #P_C
            temp[1,0] = W_OTC[1,:]@np.hstack((1, P_2[-1], tempPi[:,0])) #P_2
            temp[2,0] = W_OTC[2,:]@np.hstack((1, P_1[-1], tempPi[:,0])) #P_1
            temp[3,0] = W_OTC[3,:]@np.hstack((1, C_H[-1], tempPi[:,0])) #C_H
            temp[4,0] = W_OTC[4,:]@np.hstack((1, C_P[-1], tempPi[:,0])) #C_P
            temp[temp<0]=0
            
            for i in range(1,T):   
                #Factor prices DGP
                inno_Pi_O = np.random.normal(0,np.sqrt(4.6059),T+1) 
                inno_Pi_M = np.random.normal(0,np.sqrt(3.7998),T+1) 
                inno_Pi_F = np.random.normal(0,np.sqrt(0.05894),T+1) 
                MA_factor = np.array([0.52539, 0.39767, -0.70058]) 
                if i == 1:
                    Pi[0,i] = W_factor[0,:]@np.hstack([1, Pi[0,i-1], 0]) + inno_Pi_O[i] + MA_factor[0]*inno_Pi_O[i-1]
                    Pi[1,i] = W_factor[1,:]@np.hstack([1, Pi[1,i-1], 0]) + inno_Pi_M[i] + MA_factor[1]*inno_Pi_M[i-1]
                    Pi[2,i] = W_factor[2,:]@np.hstack([1, Pi[2,i-1], 0]) + inno_Pi_F[i] + MA_factor[2]*inno_Pi_F[i-1]
                else:
                    Pi[0,i] = W_factor[0,:]@np.hstack([1, Pi[0,i-1], Pi[0,i-2]])+ inno_Pi_O[i] + MA_factor[0]*inno_Pi_O[i-1]
                    Pi[1,i] = W_factor[1,:]@np.hstack([1, Pi[1,i-1], Pi[1,i-2]]) + inno_Pi_M[i] + MA_factor[1]*inno_Pi_M[i-1]
                    Pi[2,i] = W_factor[2,:]@np.hstack([1, Pi[2,i-1], Pi[2,i-2]]) + inno_Pi_F[i] + MA_factor[2]*inno_Pi_F[i-1]
                  
                Pi[Pi<0]=0 
                # generate forecast/expectation for the next period's factor price, t=2
                # use the estimated factor price parameters      
                tempPi[0,i] = W_factor[0,:]@np.array([1, Pi[0,i], Pi[0,i-1]])#forecast for the ith period
                tempPi[1,i] = W_factor[1,:]@np.array([1, Pi[1,i], Pi[1,i-1]]) 
                tempPi[2,i] = W_factor[2,:]@[1, Pi[2,i], Pi[2,i-1]] 
                tempPi[tempPi<0]=0 
            
                #OTC prices DGP             
                #generate forecast for the OTC
                inno_P_C = np.random.normal(0,np.sqrt(4.839),T+52) 
                inno_P_2 = np.random.normal(0,np.sqrt(16.58),T+52) 
                inno_P_1 = np.random.normal(0,np.sqrt(47.704),T+52) 
                inno_C_H = np.random.normal(0,np.sqrt(0.020227),T+52) 
                inno_C_P = np.random.normal(0,np.sqrt(0.0016579),T+52) 
                MA_OTC = np.array([0.35802,  0.45427,  0.34237,  -0.25309,  0.45424]) 
                SMA_OTC = np.array([0.11019,  -0.027154,  0.0063966,  0.024557,  -0.027257])
                      
                P[0,i] = W_OTC[0,:]@np.hstack([1, P[0,i-1],  tempPi[:,i]]) + inno_P_C[i+52] + MA_OTC[0]*inno_P_C[i+51] + SMA_OTC[0]*inno_P_C[i] + MA_OTC[0]*SMA_OTC[0]*inno_P_C[i-1] #P_C
                P[1,i] = W_OTC[1,:]@np.hstack([1, P[1,i-1],  tempPi[:,i]]) + inno_P_2[i+52] + MA_OTC[1]*inno_P_2[i+51] + SMA_OTC[1]*inno_P_2[i] + MA_OTC[1]*SMA_OTC[1]*inno_P_2[i-1] #P_2
                P[2,i] = W_OTC[2,:]@np.hstack([1, P[2,i-1],  tempPi[:,i]]) + inno_P_1[i+52] + MA_OTC[2]*inno_P_1[i+51] + SMA_OTC[2]*inno_P_1[i] + MA_OTC[2]*SMA_OTC[2]*inno_P_1[i-1] #P_1
                P[3,i] = W_OTC[3,:]@np.hstack([1, P[3,i-1],  tempPi[:,i]]) + inno_C_H[i+52] + MA_OTC[3]*inno_C_H[i+51] + SMA_OTC[3]*inno_C_H[i] + MA_OTC[3]*SMA_OTC[3]*inno_C_H[i-1] #C_H
                P[4,i] = W_OTC[4,:]@np.hstack([1, P[4,i-1],  tempPi[:,i]]) + inno_C_P[i+52] + MA_OTC[4]*inno_C_P[i+51] + SMA_OTC[4]*inno_C_P[i] + MA_OTC[4]*SMA_OTC[4]*inno_C_P[i-1] #C_P
                P[P<0]=0
                #forward market
                pF[0,i] = P[3,i]+rng.normal(0,P[3,i]/100) #mean and standard deviation
                pF[pF<0]=0
            
                #generate forecast for the OTC 
                temp[0,i] = W_OTC[0,:]@np.hstack([1, P[0,i],  tempPi[:,i]])  #P_C
                temp[1,i] = W_OTC[1,:]@np.hstack([1, P[1,i],  tempPi[:,i]])  #P_2
                temp[2,i] = W_OTC[2,:]@np.hstack([1, P[2,i],  tempPi[:,i]])  #P_1
                temp[3,i] = W_OTC[3,:]@np.hstack([1, P[3,i],  tempPi[:,i]])  #C_H
                temp[4,i] = W_OTC[4,:]@np.hstack([1, P[4,i],  tempPi[:,i]])  #C_P
                temp[temp<0]=0 
            
            #Forecast for last period 
            #T = 100
            tempPi[0,T] = W_factor[0,:]@np.hstack([1, Pi[0,T], Pi[0,T-1]]) #forecast for the ith period
            tempPi[1,T] = W_factor[1,:]@np.hstack([1, Pi[1,T], Pi[1,T-1]])
            tempPi[2,T] = W_factor[2,:]@np.stack([1, Pi[2,T], Pi[2,T-1]])
            tempPi[tempPi<0]=0 
            
            inno_P_C = np.random.normal(0,np.sqrt(4.839),T+52+1) 
            inno_P_2 = np.random.normal(0,np.sqrt(16.58),T+52+1) 
            inno_P_1 = np.random.normal(0,np.sqrt(47.704),T+52+1) 
            inno_C_H = np.random.normal(0,np.sqrt(0.020227),T+52+1) 
            inno_C_P = np.random.normal(0,np.sqrt(0.0016579),T+52+1) 
            MA_OTC = np.array([0.35802,  0.45427,  0.34237,  -0.25309,  0.45424]) 
            SMA_OTC = np.array([0.11019,  -0.027154,  0.0063966,  0.024557,  -0.027257])
                
            P[0,T] = W_OTC[0,:]@np.hstack([1, P[0,T-1],  tempPi[:,T]]) + inno_P_C[T+52] + MA_OTC[0]*inno_P_C[T+51] + SMA_OTC[0]*inno_P_C[T] + MA_OTC[0]*SMA_OTC[0]*inno_P_C[T-1]  #P_C
            P[1,T] = W_OTC[1,:]@np.hstack([1, P[1,T-1],  tempPi[:,T]]) + inno_P_2[T+52] + MA_OTC[1]*inno_P_2[T+51] + SMA_OTC[1]*inno_P_2[T] + MA_OTC[1]*SMA_OTC[1]*inno_P_2[T-1]  #P_2
            P[2,T] = W_OTC[2,:]@np.hstack([1, P[2,T-1],  tempPi[:,T]]) + inno_P_1[T+52] + MA_OTC[2]*inno_P_1[T+51] + SMA_OTC[2]*inno_P_1[T] + MA_OTC[2]*SMA_OTC[2]*inno_P_1[T-1]  #P_1
            P[3,T] = W_OTC[3,:]@np.hstack([1, P[3,T-1],  tempPi[:,T]]) + inno_C_H[T+52] + MA_OTC[3]*inno_C_H[T+51] + SMA_OTC[3]*inno_C_H[T] + MA_OTC[3]*SMA_OTC[3]*inno_C_H[T-1]  #C_H
            P[4,T] = W_OTC[4,:]@np.hstack([1, P[4,T-1],  tempPi[:,T]]) + inno_C_P[T+52] + MA_OTC[4]*inno_C_P[T+51] + SMA_OTC[4]*inno_C_P[T] + MA_OTC[4]*SMA_OTC[4]*inno_C_P[T-1]  #C_P
            P[P<0]=0 
            #forward market
            pF[0,T] = P[3,T]+rng.normal(0,P[3,T]/100) #mean and standard deviation
            pF[pF<0]=0
            
            # simulate demand: constant
            avg_input = 1#106453
            d = np.ones((1, Tmax))*35*avg_input
            #calculate feature prices: 
            Feature = np.dot(L, P)
            X_feature = np.vstack((np.ones((1,Tmax)), P, Feature))
            X_feature = X_feature.T
            #feature for procurement decisions
            X_feature_buy = np.vstack((np.ones((1,Tmax)), P, pF,Feature))
            X_feature_buy = X_feature_buy.T

            try:
                m_ML2 = gp.Model()
                m_ML2.reset(0) # since I am running model multiple times via looping, I reset the model for each run
                m_ML2.resetParams()

                #define feature coefficients
                alpha0 = m_ML2.addVars(T, name = 'alpha0', lb=-GRB.INFINITY) 
                alpha1 = m_ML2.addVars(T, name = 'alpha1', lb=-GRB.INFINITY)
                alpha2 = m_ML2.addVars(T, name = 'alpha2', lb=-GRB.INFINITY)
                alpha3 = m_ML2.addVars(T, name = 'alpha3', lb=-GRB.INFINITY) 
                alpha4 = m_ML2.addVars(T, name = 'alpha4', lb=-GRB.INFINITY)    
                alpha5 = m_ML2.addVars(T, name = 'alpha5', lb=-GRB.INFINITY)    
                alpha6 = m_ML2.addVars(T, name = 'alpha6', lb=-GRB.INFINITY)    
                alpha7 = m_ML2.addVars(T, name = 'alpha7', lb=-GRB.INFINITY)    
                alpha8 = m_ML2.addVars(T, name = 'alpha8', lb=-GRB.INFINITY)
                alpha9 = m_ML2.addVars(T, name = 'alpha9', lb=-GRB.INFINITY)
                alpha10 = m_ML2.addVars(T, name = 'alpha10', lb=-GRB.INFINITY)
                
                alpha0_abs = m_ML2.addVars(T, name = 'alpha0_abs', lb=0) 
                alpha1_abs = m_ML2.addVars(T, name = 'alpha1_abs', lb=0)
                alpha2_abs = m_ML2.addVars(T, name = 'alpha2_abs', lb=0) 
                alpha3_abs = m_ML2.addVars(T, name = 'alpha3_abs', lb=0) 
                alpha4_abs = m_ML2.addVars(T, name = 'alpha4_abs', lb=0) 
                alpha5_abs = m_ML2.addVars(T, name = 'alpha5_abs', lb=0) 
                alpha6_abs = m_ML2.addVars(T, name = 'alpha6_abs', lb=0) 
                alpha7_abs = m_ML2.addVars(T, name = 'alpha7_abs', lb=0) 
                alpha8_abs = m_ML2.addVars(T, name = 'alpha8_abs', lb=0) 
                alpha9_abs = m_ML2.addVars(T, name = 'alpha9_abs', lb=0) 
                alpha10_abs = m_ML2.addVars(T, name = 'alpha10_abs', lb=0)     
                
                beta0 = m_ML2.addVars(T, name = 'beta0', lb=-GRB.INFINITY) 
                beta1 = m_ML2.addVars(T, name = 'beta1', lb=-GRB.INFINITY)
                beta2 = m_ML2.addVars(T, name = 'beta2', lb=-GRB.INFINITY)
                beta3 = m_ML2.addVars(T, name = 'beta3', lb=-GRB.INFINITY)
                beta4 = m_ML2.addVars(T, name = 'beta4', lb=-GRB.INFINITY)
                beta5 = m_ML2.addVars(T, name = 'beta5', lb=-GRB.INFINITY)
                beta6 = m_ML2.addVars(T, name = 'beta6', lb=-GRB.INFINITY)
                beta7 = m_ML2.addVars(T, name = 'beta7', lb=-GRB.INFINITY)
                beta8 = m_ML2.addVars(T, name = 'beta8', lb=-GRB.INFINITY)
                beta9 = m_ML2.addVars(T, name = 'beta9', lb=-GRB.INFINITY)
                beta10 = m_ML2.addVars(T, name = 'beta10', lb=-GRB.INFINITY)
                
                beta0_abs = m_ML2.addVars(T, name = 'beta0_abs', lb=0) 
                beta1_abs = m_ML2.addVars(T, name = 'beta1_abs', lb=0)
                beta2_abs = m_ML2.addVars(T, name = 'beta2_abs', lb=0)
                beta3_abs = m_ML2.addVars(T, name = 'beta3_abs', lb=0) 
                beta4_abs = m_ML2.addVars(T, name = 'beta4_abs', lb=0) 
                beta5_abs = m_ML2.addVars(T, name = 'beta5_abs', lb=0) 
                beta6_abs = m_ML2.addVars(T, name = 'beta6_abs', lb=0) 
                beta7_abs = m_ML2.addVars(T, name = 'beta7_abs', lb=0) 
                beta8_abs = m_ML2.addVars(T, name = 'beta8_abs', lb=0) 
                beta9_abs = m_ML2.addVars(T, name = 'beta9_abs', lb=0) 
                beta10_abs = m_ML2.addVars(T, name = 'beta10_abs', lb=0) 
                
                gamma0 = m_ML2.addVars(T, name = 'gamma0', lb=-GRB.INFINITY) 
                gamma1 = m_ML2.addVars(T, name = 'gamma1', lb=-GRB.INFINITY)
                gamma2 = m_ML2.addVars(T, name = 'gamma2', lb=-GRB.INFINITY)
                gamma3 = m_ML2.addVars(T, name = 'gamma3', lb=-GRB.INFINITY)
                gamma4 = m_ML2.addVars(T, name = 'gamma4', lb=-GRB.INFINITY)
                gamma5 = m_ML2.addVars(T, name = 'gamma5', lb=-GRB.INFINITY)
                gamma6 = m_ML2.addVars(T, name = 'gamma6', lb=-GRB.INFINITY)
                gamma7 = m_ML2.addVars(T, name = 'gamma7', lb=-GRB.INFINITY)
                gamma8 = m_ML2.addVars(T, name = 'gamma8', lb=-GRB.INFINITY)
                gamma9 = m_ML2.addVars(T, name = 'gamma9', lb=-GRB.INFINITY)
                gamma10 = m_ML2.addVars(T, name = 'gamma10', lb=-GRB.INFINITY)
                
                gamma0_abs = m_ML2.addVars(T, name = 'gamma0_abs', lb=0) 
                gamma1_abs = m_ML2.addVars(T, name = 'gamma1_abs', lb=0)
                gamma2_abs = m_ML2.addVars(T, name = 'gamma2_abs', lb=0)  
                gamma3_abs = m_ML2.addVars(T, name = 'gamma3_abs', lb=0) 
                gamma4_abs = m_ML2.addVars(T, name = 'gamma4_abs', lb=0) 
                gamma5_abs = m_ML2.addVars(T, name = 'gamma5_abs', lb=0) 
                gamma6_abs = m_ML2.addVars(T, name = 'gamma6_abs', lb=0) 
                gamma7_abs = m_ML2.addVars(T, name = 'gamma7_abs', lb=0) 
                gamma8_abs = m_ML2.addVars(T, name = 'gamma8_abs', lb=0) 
                gamma9_abs = m_ML2.addVars(T, name = 'gamma9_abs', lb=0) 
                gamma10_abs = m_ML2.addVars(T, name = 'gamma10_abs', lb=0)   
                
                #define hedging feature coefficients            
                zeta0 = m_ML2.addVars(T, name = 'zeta0', lb=-GRB.INFINITY) 
                zeta1 = m_ML2.addVars(T, name = 'zeta1', lb=-GRB.INFINITY)
                zeta2 = m_ML2.addVars(T, name = 'zeta2', lb=-GRB.INFINITY)
                zeta3 = m_ML2.addVars(T, name = 'zeta3', lb=-GRB.INFINITY)
                zeta4 = m_ML2.addVars(T, name = 'zeta4', lb=-GRB.INFINITY)
                zeta5 = m_ML2.addVars(T, name = 'zeta5', lb=-GRB.INFINITY)
                zeta6 = m_ML2.addVars(T, name = 'zeta6', lb=-GRB.INFINITY)
                zeta7 = m_ML2.addVars(T, name = 'zeta7', lb=-GRB.INFINITY)
                zeta8 = m_ML2.addVars(T, name = 'zeta8', lb=-GRB.INFINITY)
                zeta9 = m_ML2.addVars(T, name = 'zeta9', lb=-GRB.INFINITY)
                zeta10 = m_ML2.addVars(T, name = 'zeta10', lb=-GRB.INFINITY)
                zeta11 = m_ML2.addVars(T, name = 'zeta11', lb=-GRB.INFINITY)

                
                zeta0_abs = m_ML2.addVars(T, name = 'zeta0_abs', lb=0) 
                zeta1_abs = m_ML2.addVars(T, name = 'zeta1_abs', lb=0)
                zeta2_abs = m_ML2.addVars(T, name = 'zeta2_abs', lb=0)
                zeta3_abs = m_ML2.addVars(T, name = 'zeta3_abs', lb=0) 
                zeta4_abs = m_ML2.addVars(T, name = 'zeta4_abs', lb=0) 
                zeta5_abs = m_ML2.addVars(T, name = 'zeta5_abs', lb=0) 
                zeta6_abs = m_ML2.addVars(T, name = 'zeta6_abs', lb=0) 
                zeta7_abs = m_ML2.addVars(T, name = 'zeta7_abs', lb=0) 
                zeta8_abs = m_ML2.addVars(T, name = 'zeta8_abs', lb=0) 
                zeta9_abs = m_ML2.addVars(T, name = 'zeta9_abs', lb=0) 
                zeta10_abs = m_ML2.addVars(T, name = 'zeta10_abs', lb=0)  
                zeta11_abs = m_ML2.addVars(T, name = 'zeta11_abs', lb=0) 
                #define procurement decisions
                q_spot = m_ML2.addVars(T, vtype = GRB.BINARY, name = 'q_spot')
                q_fwd = m_ML2.addVars(T, vtype = GRB.BINARY, name = 'q_fwd')
                Zeta_threshold = m_ML2.addVars(T, name = 'Zeta_threshold')
                # define auxiliary decision variables
                x1_10 = m_ML2.addVars(T, name = 'x1_10', lb=-GRB.INFINITY)
                x2_10 = m_ML2.addVars(T, name = 'x2_10', lb=-GRB.INFINITY)
                y1_10 = m_ML2.addVars(T, name = 'y1_10', lb=-GRB.INFINITY)
                y1_11 = m_ML2.addVars(T, name = 'y1_11', lb=-GRB.INFINITY)
                y1_12 = m_ML2.addVars(T, name = 'y1_12', lb=-GRB.INFINITY)
                y1_13 = m_ML2.addVars(T, name = 'y1_13', lb=-GRB.INFINITY)    
                y2_10 = m_ML2.addVars(T, name = 'y2_10', lb=-GRB.INFINITY)
                y2_11 = m_ML2.addVars(T, name = 'y2_11', lb=-GRB.INFINITY)
                y2_12 = m_ML2.addVars(T, name = 'y2_12', lb=-GRB.INFINITY)
                
               
                x1_1 = m_ML2.addVars(T, name = 'x1_1')
                x2_1 = m_ML2.addVars(T, name = 'x2_1')
                y1_1 = m_ML2.addVars(T, name = 'y1_1')
                y2_1 = m_ML2.addVars(T, name = 'y2_1')
                
                z1_1 = m_ML2.addVars(T, name = 'z1_1')
                z2_1 = m_ML2.addVars(T, name = 'z2_1')
                
                s1_1 = m_ML2.addVars(T, name = 's1_1')
                s2_1 = m_ML2.addVars(T, name = 's2_1')
                #Define thresholds
                z_2_x1 = m_ML2.addVars(T, name = 'z_2_x1')
                z_1_x1 = m_ML2.addVars(T, name = 'z_1_x1')
                z_1_y1 = m_ML2.addVars(T, name = 'z_1_y1')
                      
                R_ML = m_ML2.addVars(T, name = 'R_ML')
                
                #period 0    
                m_ML2.addConstr(s1_1[0] == w1[0])
                m_ML2.addConstr(s2_1[0] == w2[0])
                #constraints for coefficients
                m_ML2.addConstr(alpha0_abs[0] == abs_(alpha0[0]))
                m_ML2.addConstr(alpha1_abs[0] == abs_(alpha1[0]))
                m_ML2.addConstr(alpha2_abs[0] == abs_(alpha2[0]))
                m_ML2.addConstr(alpha3_abs[0] == abs_(alpha3[0]))
                m_ML2.addConstr(alpha4_abs[0] == abs_(alpha4[0]))
                m_ML2.addConstr(alpha5_abs[0] == abs_(alpha5[0]))
                m_ML2.addConstr(alpha6_abs[0] == abs_(alpha6[0]))
                m_ML2.addConstr(alpha7_abs[0] == abs_(alpha7[0]))
                m_ML2.addConstr(alpha8_abs[0] == abs_(alpha8[0]))
                m_ML2.addConstr(alpha9_abs[0] == abs_(alpha9[0]))
                m_ML2.addConstr(alpha10_abs[0] == abs_(alpha10[0]))
                
                m_ML2.addConstr(beta0_abs[0] == abs_(beta0[0]))
                m_ML2.addConstr(beta1_abs[0] == abs_(beta1[0]))
                m_ML2.addConstr(beta2_abs[0] == abs_(beta2[0]))
                m_ML2.addConstr(beta3_abs[0] == abs_(beta3[0]))
                m_ML2.addConstr(beta4_abs[0] == abs_(beta4[0]))
                m_ML2.addConstr(beta5_abs[0] == abs_(beta5[0]))
                m_ML2.addConstr(beta6_abs[0] == abs_(beta6[0]))
                m_ML2.addConstr(beta7_abs[0] == abs_(beta7[0]))
                m_ML2.addConstr(beta8_abs[0] == abs_(beta8[0]))
                m_ML2.addConstr(beta9_abs[0] == abs_(beta9[0]))
                m_ML2.addConstr(beta10_abs[0] == abs_(beta10[0]))
                
                m_ML2.addConstr(gamma0_abs[0] == abs_(gamma0[0]))
                m_ML2.addConstr(gamma1_abs[0] == abs_(gamma1[0]))
                m_ML2.addConstr(gamma2_abs[0] == abs_(gamma2[0]))    
                m_ML2.addConstr(gamma3_abs[0] == abs_(gamma3[0])) 
                m_ML2.addConstr(gamma4_abs[0] == abs_(gamma4[0])) 
                m_ML2.addConstr(gamma5_abs[0] == abs_(gamma5[0])) 
                m_ML2.addConstr(gamma6_abs[0] == abs_(gamma6[0])) 
                m_ML2.addConstr(gamma7_abs[0] == abs_(gamma7[0])) 
                m_ML2.addConstr(gamma8_abs[0] == abs_(gamma8[0])) 
                m_ML2.addConstr(gamma9_abs[0] == abs_(gamma9[0])) 
                m_ML2.addConstr(gamma10_abs[0] == abs_(gamma10[0]))     
                
                m_ML2.addConstr(zeta0_abs[0] >= zeta0[0])
                m_ML2.addConstr(zeta1_abs[0] >= zeta1[0])
                m_ML2.addConstr(zeta2_abs[0] >= zeta2[0])  
                m_ML2.addConstr(zeta3_abs[0] >= zeta3[0])
                m_ML2.addConstr(zeta4_abs[0] >= zeta4[0])
                m_ML2.addConstr(zeta5_abs[0] >= zeta5[0]) 
                m_ML2.addConstr(zeta6_abs[0] >= zeta6[0]) 
                m_ML2.addConstr(zeta7_abs[0] >= zeta7[0]) 
                m_ML2.addConstr(zeta8_abs[0] >= zeta8[0])
                m_ML2.addConstr(zeta9_abs[0] >= zeta9[0]) 
                m_ML2.addConstr(zeta10_abs[0] >= zeta10[0])  
                m_ML2.addConstr(zeta11_abs[0] >= zeta11[0]) 
                
                m_ML2.addConstr(zeta0_abs[0] >= -zeta0[0])
                m_ML2.addConstr(zeta1_abs[0] >= -zeta1[0])
                m_ML2.addConstr(zeta2_abs[0] >= -zeta2[0])  
                m_ML2.addConstr(zeta3_abs[0] >= -zeta3[0])
                m_ML2.addConstr(zeta4_abs[0] >= -zeta4[0])
                m_ML2.addConstr(zeta5_abs[0] >= -zeta5[0]) 
                m_ML2.addConstr(zeta6_abs[0] >= -zeta6[0]) 
                m_ML2.addConstr(zeta7_abs[0] >= -zeta7[0]) 
                m_ML2.addConstr(zeta8_abs[0] >= -zeta8[0])
                m_ML2.addConstr(zeta9_abs[0] >= -zeta9[0]) 
                m_ML2.addConstr(zeta10_abs[0] >= -zeta10[0])  
                m_ML2.addConstr(zeta11_abs[0] >= -zeta11[0])  
                #purchasing decision first
                 
                m_ML2.addConstr(q_spot[0] == 1)     
                m_ML2.addConstr(Zeta_threshold[0] == zeta0[0]*X_feature_buy[0,0]+zeta1[0]*X_feature_buy[0,1]+zeta2[0]*X_feature_buy[0,2]+zeta3[0]*X_feature_buy[0,3]\
                                +zeta4[0]*X_feature_buy[0,4]+zeta5[0]*X_feature_buy[0,5]+zeta6[0]*X_feature_buy[0,6]+zeta7[0]*X_feature_buy[0,7]\
                                +zeta8[0]*X_feature_buy[0,8]+zeta9[0]*X_feature_buy[0,9]+zeta10[0]*X_feature_buy[0,10]+zeta11[0]*X_feature_buy[0,11])
                    
                m_ML2.addConstr(Zeta_threshold[0] >= pF[0,0] + eps - M*(1-q_fwd[0])) # if the forward price pF is greater than the threshold, don't buy
                m_ML2.addConstr(Zeta_threshold[0] <= pF[0,0] + M*q_fwd[0]) # pF[0] here is to buy corn&soy for the next eta period   
                #Threshold Upperbar: Holding vs. Selling
                #\bar{z}_2
                if temp[0,0]+temp[4,0]-P[1,0]-P[3,0]<=0:
                    m_ML2.addConstr(z_2_x1[0] == 0)
                else: 
                    m_ML2.addConstr(z_2_x1[0] == alpha0[0]*X_feature[0,0]+alpha1[0]*X_feature[0,1]+alpha2[0]*X_feature[0,2]
                                    +alpha3[0]*X_feature[0,3]+alpha4[0]*X_feature[0,4]+alpha5[0]*X_feature[0,5]+alpha6[0]*X_feature[0,6]
                                    +alpha7[0]*X_feature[0,7]+alpha8[0]*X_feature[0,8]+alpha9[0]*X_feature[0,9]+alpha10[0]*X_feature[0,10])
                #\bar{z}_1    
                if P[2,0] <= min(temp[1,0],temp[0,0]+temp[4,0])-P[3,0]:
                    m_ML2.addConstr(z_1_x1[0] == beta0[0]*X_feature[0,0]+beta1[0]*X_feature[0,1]+beta2[0]*X_feature[0,2]+beta3[0]*X_feature[0,3]
                                    +beta4[0]*X_feature[0,4]+beta5[0]*X_feature[0,5]+beta6[0]*X_feature[0,6]+beta7[0]*X_feature[0,7]
                                    +beta8[0]*X_feature[0,8]+beta9[0]*X_feature[0,9]+beta10[0]*X_feature[0,10])
                elif  P[2,0]>=max(temp[1,0],temp[0,0]+temp[4,0])-P[3,0]: 
                    m_ML2.addConstr(z_1_x1[0] == 0)
                else: 
                    m_ML2.addConstr(z_1_x1[0] == beta0[0]*X_feature[0,0]+beta1[0]*X_feature[0,1]+beta2[0]*X_feature[0,2]+beta3[0]*X_feature[0,3]
                                    +beta4[0]*X_feature[0,4]+beta5[0]*X_feature[0,5]+beta6[0]*X_feature[0,6]+beta7[0]*X_feature[0,7]
                                    +beta8[0]*X_feature[0,8]+beta9[0]*X_feature[0,9]+beta10[0]*X_feature[0,10])
                #Underbar: Holding vs. Fulfilling
                #\underbar{z}_1  
                if (1-alpha)* P[0,0]+P[4,0]<=min(temp[1,0],temp[0,0]+temp[4,0])-P[3,0]:
                    m_ML2.addConstr(z_1_y1[0] == gamma0[0]*X_feature[0,0]+gamma1[0]*X_feature[0,1]+gamma2[0]*X_feature[0,2]+gamma3[0]*X_feature[0,3]
                                    +gamma4[0]*X_feature[0,4]+gamma5[0]*X_feature[0,5]+gamma6[0]*X_feature[0,6]+gamma7[0]*X_feature[0,7]
                                    +gamma8[0]*X_feature[0,8]+gamma9[0]*X_feature[0,9]+gamma10[0]*X_feature[0,10])
                elif (1-alpha)* P[0,0]+P[4,0]>=max(temp[1,0],temp[0,0]+temp[4,0])-P[3,0]:
                    m_ML2.addConstr(z_1_y1[0] == 0)
                else: 
                    m_ML2.addConstr(z_1_y1[0] == gamma0[0]*X_feature[0,0]+gamma1[0]*X_feature[0,1]+gamma2[0]*X_feature[0,2]+gamma3[0]*X_feature[0,3]
                                    +gamma4[0]*X_feature[0,4]+gamma5[0]*X_feature[0,5]+gamma6[0]*X_feature[0,6]+gamma7[0]*X_feature[0,7]
                                    +gamma8[0]*X_feature[0,8]+gamma9[0]*X_feature[0,9]+gamma10[0]*X_feature[0,10])
                    
                #decision variables 5 cases
                if P[0,0]+P[4,0]<P[1,0]: 
                    if P[2,0]>(1-alpha)*P[0,0]+P[4,0]:            
                        m_ML2.addConstr(y1_1[0] == 0.)
                        m_ML2.addConstr(y2_1[0] == 0.)
                        m_ML2.addConstr(x1_10[0] == s1_1[0]-z_1_x1[0]) #auxiliary var to x1_1
                        m_ML2.addConstr(x1_1[0] == gp.max_(0., x1_10[0])) #the general constraints methods accept Var objects only, not sum/minus b/w two vars.
                        m_ML2.addConstr(x2_1[0] == s2_1[0]) 
                    else:
                        m_ML2.addConstr(y1_10[0] == s1_1[0]-z_1_y1[0])
                        m_ML2.addConstr(y1_11[0] == gp.max_(y1_10[0],0.))
                        m_ML2.addConstr(y1_1[0] == gp.min_(y1_10[0],q)) 
                        m_ML2.addConstr(x1_10[0] == s1_1[0]-z_1_x1[0]-q) 
                        m_ML2.addConstr(x1_1[0] == gp.max_(0., x1_10[0]))
                        m_ML2.addConstr(x2_1[0] == s2_1[0])
                else: 
                    if P[2,0]>(1-alpha)*P[0,0]+P[4,0]:#case 2a
                        m_ML2.addConstr(y1_1[0] == 0.)
                        m_ML2.addConstr(y2_1[0] == gp.min_(s2_1[0], q))
                        m_ML2.addConstr(x1_10[0] == s1_1[0]-z_1_x1[0])
                        m_ML2.addConstr(x1_1[0] == gp.max_(0., x1_10[0])) 
                        m_ML2.addConstr(x2_10[0] == s2_1[0]-q-z_2_x1[0]) 
                        m_ML2.addConstr(x2_1[0] == gp.max_(0., x2_10[0])) 
                    elif P[2,0]>P[1,0]-alpha* P[0,0]: #case 2b
                        m_ML2.addConstr(y1_10[0] == s1_1[0]-z_1_y1[0])
                        m_ML2.addConstr(y1_11[0] == gp.max_(y1_10[0],0.))
                        m_ML2.addConstr(y1_12[0] == q-s2_1[0])
                        m_ML2.addConstr(y1_13[0] == gp.max_(y1_12[0],0.))
                        m_ML2.addConstr(y1_1[0] == gp.min_(y1_11[0], y1_13[0]))
                        
                        m_ML2.addConstr(y2_1[0] == gp.min_(s2_1[0], q))
                        
                        m_ML2.addConstr(x1_10[0] == s1_1[0]-z_1_x1[0]-y1_1[0])
                        m_ML2.addConstr(x1_1[0] == gp.max_(0., x1_10[0])) 
                        m_ML2.addConstr(x2_10[0] == s2_1[0]-q-z_2_x1[0]) 
                        m_ML2.addConstr(x2_1[0] == gp.max_(0., x2_10[0])) 
                    else: #case 2c
                        m_ML2.addConstr(y1_10[0] == s1_1[0]-z_1_y1[0])
                        m_ML2.addConstr(y1_11[0] == gp.max_(y1_10[0], 0.))   
                        m_ML2.addConstr(y1_1[0] == gp.min_(q, y1_11[0]))
                        
                        m_ML2.addConstr(y2_10[0] == q -s1_1[0]+z_1_y1[0])
                        m_ML2.addConstr(y2_11[0] == gp.min_(q, y2_10[0]))
                        m_ML2.addConstr(y2_12[0] == gp.max_(0., y2_11[0]))
                        m_ML2.addConstr(y2_1[0] == gp.min_(s2_1[0], y2_12[0]))
                        
                m_ML2.addConstr(z1_1[0] == s1_1[0]-x1_1[0]-y1_1[0])   
                m_ML2.addConstr(z2_1[0] == s2_1[0]-x2_1[0]-y2_1[0])    
                #reward
                m_ML2.addConstr(R_ML[0] == -P[3,0]*q_spot[0]-pF[0,0]*q_fwd[0]+ P[0,0]*(y2_1[0]+(1-alpha)*y1_1[0])+P[1,0]*x2_1[0]+P[2,0]*x1_1[0]-P[3,0]*(z1_1[0]+z2_1[0])-P[4,0]*(q-y1_1[0]-y2_1[0])-P[3,0]*(w1[0]+w2[0])
                                - alpha0_abs[0] - alpha1_abs[0] - alpha2_abs[0]- alpha3_abs[0]- alpha4_abs[0]- alpha5_abs[0]- alpha6_abs[0]
                                - alpha7_abs[0]- alpha8_abs[0]- alpha9_abs[0]- alpha10_abs[0]- beta0_abs[0] - beta1_abs[0] - beta2_abs[0]
                                - beta3_abs[0] - beta4_abs[0] - beta5_abs[0] - beta6_abs[0] - beta7_abs[0] - beta8_abs[0] - beta9_abs[0]
                                - beta10_abs[0]- gamma0_abs[0] - gamma1_abs[0] - gamma2_abs[0] - gamma3_abs[0] - gamma4_abs[0] - gamma5_abs[0]
                                - gamma6_abs[0] - gamma7_abs[0] - gamma8_abs[0] - gamma9_abs[0] - gamma10_abs[0] - zeta0_abs[0] - zeta1_abs[0] - zeta2_abs[0] - zeta3_abs[0] - zeta4_abs[0] 
                                - zeta5_abs[0] - zeta6_abs[0] - zeta7_abs[0] - zeta8_abs[0] - zeta9_abs[0] - zeta10_abs[0] - zeta11_abs[0])  
                #period i
                for i in range(1,T):  
                    m_ML2.addConstr(s1_1[i] == w1[i])
                    m_ML2.addConstr(s2_1[i] == w2[i]+z1_1[i-1]+z2_1[i-1])
                        
                    m_ML2.addConstr(alpha0_abs[i] == abs_(alpha0[i]))
                    m_ML2.addConstr(alpha1_abs[i] == abs_(alpha1[i]))
                    m_ML2.addConstr(alpha2_abs[i] == abs_(alpha2[i]))
                    m_ML2.addConstr(alpha3_abs[i] == abs_(alpha3[i]))
                    m_ML2.addConstr(alpha4_abs[i] == abs_(alpha4[i]))
                    m_ML2.addConstr(alpha5_abs[i] == abs_(alpha5[i]))
                    m_ML2.addConstr(alpha6_abs[i] == abs_(alpha6[i]))
                    m_ML2.addConstr(alpha7_abs[i] == abs_(alpha7[i]))
                    m_ML2.addConstr(alpha8_abs[i] == abs_(alpha8[i]))
                    m_ML2.addConstr(alpha9_abs[i] == abs_(alpha9[i]))
                    m_ML2.addConstr(alpha10_abs[i] == abs_(alpha10[i]))
                    
                    m_ML2.addConstr(beta0_abs[i] == abs_(beta0[i]))
                    m_ML2.addConstr(beta1_abs[i] == abs_(beta1[i]))
                    m_ML2.addConstr(beta2_abs[i] == abs_(beta2[i]))
                    m_ML2.addConstr(beta3_abs[i] == abs_(beta3[i]))
                    m_ML2.addConstr(beta4_abs[i] == abs_(beta4[i]))
                    m_ML2.addConstr(beta5_abs[i] == abs_(beta5[i]))
                    m_ML2.addConstr(beta6_abs[i] == abs_(beta6[i]))
                    m_ML2.addConstr(beta7_abs[i] == abs_(beta7[i]))
                    m_ML2.addConstr(beta8_abs[i] == abs_(beta8[i]))
                    m_ML2.addConstr(beta9_abs[i] == abs_(beta9[i]))
                    m_ML2.addConstr(beta10_abs[i] == abs_(beta10[i]))
                    
                    m_ML2.addConstr(gamma0_abs[i] == abs_(gamma0[i]))
                    m_ML2.addConstr(gamma1_abs[i] == abs_(gamma1[i]))
                    m_ML2.addConstr(gamma2_abs[i] == abs_(gamma2[i]))    
                    m_ML2.addConstr(gamma3_abs[i] == abs_(gamma3[i])) 
                    m_ML2.addConstr(gamma4_abs[i] == abs_(gamma4[i])) 
                    m_ML2.addConstr(gamma5_abs[i] == abs_(gamma5[i])) 
                    m_ML2.addConstr(gamma6_abs[i] == abs_(gamma6[i])) 
                    m_ML2.addConstr(gamma7_abs[i] == abs_(gamma7[i])) 
                    m_ML2.addConstr(gamma8_abs[i] == abs_(gamma8[i])) 
                    m_ML2.addConstr(gamma9_abs[i] == abs_(gamma9[i])) 
                    m_ML2.addConstr(gamma10_abs[i] == abs_(gamma10[i])) 
                    
                    m_ML2.addConstr(zeta0_abs[i] >= zeta0[i])
                    m_ML2.addConstr(zeta1_abs[i] >= zeta1[i])
                    m_ML2.addConstr(zeta2_abs[i] >= zeta2[i])  
                    m_ML2.addConstr(zeta3_abs[i] >= zeta3[i])
                    m_ML2.addConstr(zeta4_abs[i] >= zeta4[i])
                    m_ML2.addConstr(zeta5_abs[i] >= zeta5[i]) 
                    m_ML2.addConstr(zeta6_abs[i] >= zeta6[i]) 
                    m_ML2.addConstr(zeta7_abs[i] >= zeta7[i]) 
                    m_ML2.addConstr(zeta8_abs[i] >= zeta8[i])
                    m_ML2.addConstr(zeta9_abs[i] >= zeta9[i]) 
                    m_ML2.addConstr(zeta10_abs[i] >= zeta10[i])  
                    m_ML2.addConstr(zeta11_abs[i] >= zeta11[i]) 
                    
                    m_ML2.addConstr(zeta0_abs[i] >= -zeta0[i])
                    m_ML2.addConstr(zeta1_abs[i] >= -zeta1[i])
                    m_ML2.addConstr(zeta2_abs[i] >= -zeta2[i])  
                    m_ML2.addConstr(zeta3_abs[i] >= -zeta3[i])
                    m_ML2.addConstr(zeta4_abs[i] >= -zeta4[i])
                    m_ML2.addConstr(zeta5_abs[i] >= -zeta5[i]) 
                    m_ML2.addConstr(zeta6_abs[i] >= -zeta6[i]) 
                    m_ML2.addConstr(zeta7_abs[i] >= -zeta7[i]) 
                    m_ML2.addConstr(zeta8_abs[i] >= -zeta8[i])
                    m_ML2.addConstr(zeta9_abs[i] >= -zeta9[i]) 
                    m_ML2.addConstr(zeta10_abs[i] >= -zeta10[i])  
                    m_ML2.addConstr(zeta11_abs[i] >= -zeta11[i])  
                    
                    #procurement decision 
                    #here, the optimal decision is embedded in the constraint; we could also use an if condition to make the optimal decision
                    m_ML2.addConstr(Zeta_threshold[i] == zeta0[i]*X_feature_buy[i,0]+zeta1[i]*X_feature_buy[i,1]+zeta2[i]*X_feature_buy[i,2]+zeta3[i]*X_feature_buy[i,3]\
                                    +zeta4[i]*X_feature_buy[i,4]+zeta5[i]*X_feature_buy[i,5]+zeta6[i]*X_feature_buy[i,6]+zeta7[i]*X_feature_buy[i,7]\
                                    +zeta8[i]*X_feature_buy[i,8]+zeta9[i]*X_feature_buy[i,9]+zeta10[i]*X_feature_buy[i,10]+zeta11[i]*X_feature_buy[i,11])
                        
                    m_ML2.addConstr(q_spot[i] + q_fwd[i-1] == 1) 
                    #Here is the if statement    
                    m_ML2.addConstr(Zeta_threshold[i] >= pF[0,i] + eps - M*(1-q_fwd[i])) # if the forward price pF is greater than the threshold, don't buy
                    m_ML2.addConstr(Zeta_threshold[i] <= pF[0,i] + M*q_fwd[i]) # pF[i] here is to buy corn&soy for the next eta period   
                    
                    #Threshold Upperbar: Holding vs. Selling
                    #\bar{z}_2
                    if temp[0,i]+temp[4,i]-P[1,i]-P[3,i]<=0:
                        m_ML2.addConstr(z_2_x1[i] == 0)
                    else: 
                        m_ML2.addConstr(z_2_x1[i] == alpha0[i]*X_feature[i,0]+alpha1[i]*X_feature[i,1]+alpha2[i]*X_feature[i,2]+alpha3[i]*X_feature[i,3]
                                        +alpha4[i]*X_feature[i,4]+alpha5[i]*X_feature[i,5]+alpha6[i]*X_feature[i,6]+alpha7[i]*X_feature[i,7]
                                        +alpha8[i]*X_feature[i,8]+alpha9[i]*X_feature[i,9]+alpha10[i]*X_feature[i,10])
                    #\bar{z}_1    
                    if P[2,i] <= min(temp[1,i],temp[0,i]+temp[4,i])-P[3,i]:
                        m_ML2.addConstr(z_1_x1[i] == beta0[i]*X_feature[i,0]+beta1[i]*X_feature[i,1]+beta2[i]*X_feature[i,2]+beta3[i]*X_feature[i,3]
                                        +beta4[i]*X_feature[i,4]+beta5[i]*X_feature[i,5]+beta6[i]*X_feature[i,6]+beta7[i]*X_feature[i,7]
                                        +beta8[i]*X_feature[i,8]+beta9[i]*X_feature[i,9]+beta10[i]*X_feature[i,10])
                    elif  P[2,i]>=max(temp[1,i],temp[0,i]+temp[4,i])-P[3,i]: 
                        m_ML2.addConstr(z_1_x1[i] == 0)
                    else: 
                        m_ML2.addConstr(z_1_x1[i]  == beta0[i]*X_feature[i,0]+beta1[i]*X_feature[i,1]+beta2[i]*X_feature[i,2]+beta3[i]*X_feature[i,3]
                                        +beta4[i]*X_feature[i,4]+beta5[i]*X_feature[i,5]+beta6[i]*X_feature[i,6]+beta7[i]*X_feature[i,7]
                                        +beta8[i]*X_feature[i,8]+beta9[i]*X_feature[i,9]+beta10[i]*X_feature[i,10])
                    #Underbar: Holding vs. Fulfilling
                    #\underbar{z}_1  
                    if (1-alpha)* P[0,i]+P[4,i]<=min(temp[1,i],temp[0,i]+temp[4,i])-P[3,i]:
                        m_ML2.addConstr(z_1_y1[i] == gamma0[i]*X_feature[i,0]+gamma1[i]*X_feature[i,1]+gamma2[i]*X_feature[i,2]+gamma3[i]*X_feature[i,3]
                                        +gamma4[i]*X_feature[i,4]+gamma5[i]*X_feature[i,5]+gamma6[i]*X_feature[i,6]+gamma7[i]*X_feature[i,7]
                                        +gamma8[i]*X_feature[i,8]+gamma9[i]*X_feature[i,9]+gamma10[i]*X_feature[i,10])
                    elif (1-alpha)* P[0,i]+P[4,i]>=max(temp[1,i],temp[0,i]+temp[4,i])-P[3,i]:
                        m_ML2.addConstr(z_1_y1[i] == 0)
                    else: 
                        m_ML2.addConstr(z_1_y1[i] == gamma0[i]*X_feature[i,0]+gamma1[i]*X_feature[i,1]+gamma2[i]*X_feature[i,2]+gamma3[i]*X_feature[i,3]
                                        +gamma4[i]*X_feature[i,4]+gamma5[i]*X_feature[i,5]+gamma6[i]*X_feature[i,6]+gamma7[i]*X_feature[i,7]
                                        +gamma8[i]*X_feature[i,8]+gamma9[i]*X_feature[i,9]+gamma10[i]*X_feature[i,10])
                    
                    #decision variables 5 cases
                    if P[0,i]+P[4,i]<P[1,i]: 
                        if P[2,i]>(1-alpha)*P[0,i]+P[4,i]:            
                            m_ML2.addConstr(y1_1[i] == 0.)
                            m_ML2.addConstr(y2_1[i] == 0.)
                            m_ML2.addConstr(x1_10[i] == s1_1[i]-z_1_x1[i]) #auxiliary var to x1_1
                            m_ML2.addConstr(x1_1[i] == gp.max_(0., x1_10[i])) #the general constraints methods accept Var objects only, not sum/minus b/w two vars.
                            m_ML2.addConstr(x2_1[i] == s2_1[i]) 
                        else:
                            m_ML2.addConstr(y1_10[i] == s1_1[i]-z_1_y1[i])
                            m_ML2.addConstr(y1_11[i] == gp.max_(y1_10[i],0.))
                            m_ML2.addConstr(y1_1[i] == gp.min_(y1_10[i],q)) 
                            m_ML2.addConstr(x1_10[i] == s1_1[i]-z_1_x1[i]-q) 
                            m_ML2.addConstr(x1_1[i] == gp.max_(0., x1_10[i]))
                            m_ML2.addConstr(x2_1[i] == s2_1[i])
                    else: 
                        if P[2,i]>(1-alpha)*P[0,i]+P[4,i]:#case 2a
                            m_ML2.addConstr(y1_1[i] == 0.)
                            m_ML2.addConstr(y2_1[i] == gp.min_(s2_1[i], q))
                            m_ML2.addConstr(x1_10[i] == s1_1[i]-z_1_x1[i])
                            m_ML2.addConstr(x1_1[i] == gp.max_(0., x1_10[i])) 
                            m_ML2.addConstr(x2_10[i] == s2_1[i]-q-z_2_x1[i]) 
                            m_ML2.addConstr(x2_1[i] == gp.max_(0., x2_10[i])) 
                        elif P[2,i]>P[1,i]-alpha* P[0,i]: #case 2b
                            m_ML2.addConstr(y1_10[i] == s1_1[i]-z_1_y1[i])
                            m_ML2.addConstr(y1_11[i] == gp.max_(y1_10[i],0.))
                            m_ML2.addConstr(y1_12[i] == q-s2_1[i])
                            m_ML2.addConstr(y1_13[i] == gp.max_(y1_12[i],0.))
                            m_ML2.addConstr(y1_1[i] == gp.min_(y1_11[i], y1_13[i]))
                            
                            m_ML2.addConstr(y2_1[i] == gp.min_(s2_1[i], q))
                            
                            m_ML2.addConstr(x1_10[i] == s1_1[i]-z_1_x1[i]-y1_1[i])
                            m_ML2.addConstr(x1_1[i] == gp.max_(0., x1_10[i])) 
                            m_ML2.addConstr(x2_10[i] == s2_1[i]-q-z_2_x1[i]) 
                            m_ML2.addConstr(x2_1[i] == gp.max_(0., x2_10[i])) 
                        else: #case 2c
                            m_ML2.addConstr(y1_10[i] == s1_1[i]-z_1_y1[i])
                            m_ML2.addConstr(y1_11[i] == gp.max_(y1_10[i], 0.))   
                            m_ML2.addConstr(y1_1[i] == gp.min_(q, y1_11[i]))
                            
                            m_ML2.addConstr(y2_10[i] == q -s1_1[i]+z_1_y1[i])
                            m_ML2.addConstr(y2_11[i] == gp.min_(q, y2_10[i]))
                            m_ML2.addConstr(y2_12[i] == gp.max_(0., y2_11[i]))
                            m_ML2.addConstr(y2_1[i] == gp.min_(s2_1[i], y2_12[i]))
                            
                    m_ML2.addConstr(z1_1[i] == s1_1[i]-x1_1[i]-y1_1[i])   
                    m_ML2.addConstr(z2_1[i] == s2_1[i]-x2_1[i]-y2_1[i])    
                    #reward
                    m_ML2.addConstr(R_ML[i] == -P[3,i]*q_spot[i]-pF[0,i]*q_fwd[i] + P[0,i]*(y2_1[i]+(1-alpha)*y1_1[i])+P[1,i]*x2_1[i]+P[2,i]*x1_1[i]-P[3,i]*(z1_1[i]+z2_1[i])-P[4,i]*(q-y1_1[i]-y2_1[i])-P[3,i]*(w1[i]+w2[i])
                                    - alpha0_abs[i] - alpha1_abs[i] - alpha2_abs[i]- alpha3_abs[i]- alpha4_abs[i]- alpha5_abs[i]- alpha6_abs[i]
                                    - alpha7_abs[i]- alpha8_abs[i]- alpha9_abs[i]- alpha10_abs[i]- beta0_abs[i] - beta1_abs[i] - beta2_abs[i]
                                    - beta3_abs[i] - beta4_abs[i] - beta5_abs[i] - beta6_abs[i] - beta7_abs[i] - beta8_abs[i] - beta9_abs[i]
                                    - beta10_abs[i]- gamma0_abs[i] - gamma1_abs[i] - gamma2_abs[i] - gamma3_abs[i] - gamma4_abs[i] - gamma5_abs[i]
                                    - gamma6_abs[i] - gamma7_abs[i] - gamma8_abs[i] - gamma9_abs[i] - gamma10_abs[i] - zeta0_abs[i] - zeta1_abs[i] - zeta2_abs[i] - zeta3_abs[i] - zeta4_abs[i] 
                                    - zeta5_abs[i] - zeta6_abs[i] - zeta7_abs[i] - zeta8_abs[i] - zeta9_abs[i] - zeta10_abs[i] - zeta11_abs[i])  
            
                   
                m_ML2.setObjective(R_ML.sum(), GRB.MAXIMIZE)         
                # Optimize model
                m_ML2.optimize()
                V2[n] = m_ML2.objVal
                    
                for v in m_ML2.getVars():
                    print('%s %g' % (v.varName, v.x))
            
                print('Obj: %g' % m_ML2.objVal)
            
            except gp.GurobiError as e:
                print('Error code ' + str(e.errno) + ': ' + str(e))
            
            except AttributeError:
                print('Encountered an attribute error')

        V_ML[T-1] = np.mean(V2)         
            
    # Plot
    import numpy as np
    import matplotlib.pyplot as plt
    fig, ax = plt.subplots()
    V_MLSort = np.sort(V_ML).T
    x = np.array([np.linspace(2,Tmax, Tmax-1)]).T 
    plt.plot(x,V_MLSort, label = "ML") 
    plt.title("Performance of Each Policy")
    plt.ylabel("Accumulated Profit")
    plt.xlabel("Time Horizon (T)")
    ax.legend()
    plt.show()    

    0
  • Jaromił Najman
    Gurobi Staff Gurobi Staff

    You are solving many models with randomly generated data. It is possible that for some data sets the model is harder to solve than it is on average. This may be due to numerical issues arising from the random coefficients, or a particular data set may simply be really hard to solve.

    You are currently solving up to the default gap of 0.01%, which is very small given that your objective function values are > 1e+07. Do you really need such a tight MIPGap? If not, you should try setting the desired MIPGap to, e.g., 0.1%:

    m_ML2.setParam("MIPGap",0.001)

    This may already solve your issue; however, it is not guaranteed to. There may be random data sets which make your model hard to solve even up to a 0.1% gap. You could also try to improve the numerics of your model a bit, cf. Guidelines for Numerical Issues.
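
    One simple, optional thing to experiment with (not a guaranteed fix, and it may slow the solver down) is Gurobi's NumericFocus parameter:

    # 0 = automatic (default), 3 = maximum care with numerics
    m_ML2.setParam("NumericFocus", 2)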

    What you could also try is to write every model as an LP/MPS file via the write method and have a closer look at a model which takes a long time to solve. Once you have identified such a case, you can inspect that particular model's numerics and maybe find parameters which improve its performance. You can then check whether the parameters you found also work in general for your whole loop.
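
    For example, inside your loop something like the following would dump each instance to disk (the file name pattern is just an illustration; n is assumed to be your loop index):

    # write the current instance in two formats for later inspection
    m_ML2.write("instance_%d.lp" % n)    # human-readable LP format
    m_ML2.write("instance_%d.mps" % n)   # MPS format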

    Best regards, 
    Jaromił

    0
  • Ye Liu
    Gurobi-versary
    Conversationalist
    First Question

    Thank you so much, Jaromił! I tried all of your suggestions, and they are very helpful. 

    I still have one question: 

    Adjusting the MIPGap is very useful; I changed the gap to 0.2%. Still, there are some instances where the gap is very hard to narrow down (it took 10 hours for one instance to converge to a 0.7% gap). I wonder if I can set a larger gap for such instances, e.g., a 0.7% gap for the harder-to-solve ones (those that take more than 10 hours to solve), while keeping a 0.2% gap for the rest?

    Really appreciated!

    Best,

    Ye

    0
  • Jaromił Najman
    Gurobi Staff Gurobi Staff

    Hi Ye,

    Changing the MIPGap parameter "on the fly" is not supported. Still, there is a solution to your situation.

    You can implement a callback in which you query the current best solution (\(z_P\)) and best bound (\(z_D\)). You check the current runtime in the callback, and if the runtime is too large, you compute the MIPGap value manually from the best solution and best bound values. You can then terminate the optimization from inside the callback function once some termination criterion of your choice has been met.

    A simplistic code snippet might look something like

    def mycallback(model, where):
        if where == GRB.Callback.MIPNODE:
            runtime = model.cbGet(GRB.Callback.RUNTIME)
            bestobj = model.cbGet(GRB.Callback.MIPSOL_OBJBST)
            bestbnd = model.cbGet(GRB.Callback.MIPSOL_OBJBND)
            # compute mipgap as described in the documentation
            # add some termination criteria based on runtime + mipgap
            # and terminate model if one of the criteria is met

    An alternative would be to query the log message in a MESSAGE callback and extract the MIPGap value computed by Gurobi. Then based on this value together with some runtime criterion, terminate the optimization from inside the MESSAGE callback.
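
    A very rough sketch of this alternative (the regular expression is simplistic and assumes the gap shows up as a percentage in the log line; the time and gap limits are placeholders):

    import re
    from gurobipy import GRB

    def msg_callback(model, where):
        if where == GRB.Callback.MESSAGE:
            msg = model.cbGet(GRB.Callback.MSG_STRING)
            runtime = model.cbGet(GRB.Callback.RUNTIME)
            # look for a value like "0.65%" in the log line (placeholder pattern)
            match = re.search(r"(\d+\.\d+)%", msg)
            if match:
                gap = float(match.group(1)) / 100.0
                # placeholder criterion: after 1 hour, accept a 0.7% gap
                if runtime >= 3600 and gap <= 0.007:
                    model.terminate()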

    Best regards, 
    Jaromił

    0
  • Ye Liu
    Gurobi-versary
    Conversationalist
    First Question

    Hi Jaromił,

    Thanks a lot for your solution! However, when I implemented the code, I sometimes get the following message: Encountered an attribute error. I don't know if I made a mistake somewhere.

    I put the following code before I define my model:

    from gurobipy import *

    def mycallback(model, where):
        if where == GRB.Callback.MIPNODE:
            runtime = model.cbGet(GRB.Callback.RUNTIME)
            bestobj = model.cbGet(GRB.Callback.MIPSOL_OBJBST)
            bestbnd = model.cbGet(GRB.Callback.MIPSOL_OBJBND)
            gap = (bestbnd - bestobj) / bestobj  # compute mipgap as described in the documentation
            if runtime >= 10 and gap <= 0.6:
                model.terminate()

    [...my model]
    m_ML2.optimize(mycallback)

    Then I get the following message, which I didn't see before I used the callback:

    Thank you so much!

    Best,

    Ye

    0
  • Jaromił Najman
    Gurobi Staff Gurobi Staff

    Hi Ye,

    Your model is infeasible or unbounded. This means that you cannot access the solution attributes. Thus you get an attribute error. I guess it is similar to the issue we discussed in this thread before.
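
    As a small sketch (using the names from your snippets), you could guard the solution query like this:

    m_ML2.optimize(mycallback)

    if m_ML2.SolCount > 0:
        # at least one feasible solution exists, so solution attributes are available
        V2[n] = m_ML2.ObjVal
        print('Obj: %g' % m_ML2.ObjVal)
    elif m_ML2.Status == GRB.INFEASIBLE:
        # no solution to query; compute an IIS to see which constraints conflict
        m_ML2.computeIIS()
        m_ML2.write("iis.ilp")
    # note: a status of GRB.INF_OR_UNBD may require re-solving with
    # DualReductions=0 to distinguish infeasible from unbounded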

    You should also check whether the bestbnd and bestobj values you get are finite, to avoid numerical errors. To check whether a feasible point is available, you can check whether the MIPNODE_SOLCNT callback attribute is > 0.

    Best regards, 
    Jaromił

    0
  • Ye Liu
    Gurobi-versary
    Conversationalist
    First Question

    Hi Jaromił,

    Thank you so much! Yes, you are right. I used the iis.ilp file to find out that I forgot to relax a variable's lower bound to negative infinity. One minor issue is that the iis.ilp file does not get overwritten; each time I need to rename the file if I want to produce the most up-to-date ILP file when I run the code.
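
    For reference, this is roughly what I do now (the file name pattern is just what I picked):

    m_ML2.computeIIS()
    m_ML2.write("iis_run%d.ilp" % n)  # one file per run so nothing is overwritten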

    Also, it seems that the Callback code I am writing doesn't work. Under the condition

    if runtime >= 200 and gap <= 0.6:
        model.terminate()

    My model still keeps running even when both conditions are satisfied (see the picture below).

    My code looks like this:

    from gurobipy import *

    def mycallback(model, where):
        if where == GRB.Callback.MIPNODE:
            count = model.cbGet(GRB.Callback.MIPNODE_SOLCNT)
            runtime = model.cbGet(GRB.Callback.RUNTIME)
            bestobj = model.cbGet(GRB.Callback.MIPSOL_OBJBST)
            bestbnd = model.cbGet(GRB.Callback.MIPSOL_OBJBND)
            gap = (bestbnd - bestobj) / bestobj  # compute mipgap as described in the documentation
            print("Count", count)
            if runtime >= 200 and gap <= 0.6:
                model.terminate()  # add some termination criteria based on runtime + mipgap
            # and terminate model if one of the criteria is met

    [then I define my model...]

    Thank you so much!

    Best,

    Ye

     

    0
  • Jaromił Najman
    Gurobi Staff Gurobi Staff

    Hi Ye,

    You are in the MIPNODE callback, but you are trying to access the MIPSOL_OBJBST and MIPSOL_OBJBND values. This won't work. You have to access MIPNODE_OBJBST and MIPNODE_OBJBND.

    I just noticed that I made the mistake originally, so this one is on me :)

    Just to be sure: you should take the absolute values of the numerator and denominator when computing the MIPGap value, see MIPGap. There are also many special cases when computing the MIPGap, such as what to do when the denominator is 0, but I don't think these apply in your case.
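
    Putting the pieces together, a corrected sketch could look like the following (the time and gap limits are placeholders; adjust them to your needs):

    from gurobipy import GRB

    TIME_LIMIT = 200    # seconds, placeholder
    GAP_LIMIT = 0.006   # 0.6%, placeholder

    def mycallback(model, where):
        if where == GRB.Callback.MIPNODE:
            # only compute a gap once a feasible solution is known
            if model.cbGet(GRB.Callback.MIPNODE_SOLCNT) > 0:
                runtime = model.cbGet(GRB.Callback.RUNTIME)
                bestobj = model.cbGet(GRB.Callback.MIPNODE_OBJBST)
                bestbnd = model.cbGet(GRB.Callback.MIPNODE_OBJBND)
                # guard against infinite values and a zero denominator
                if abs(bestobj) < GRB.INFINITY and abs(bestbnd) < GRB.INFINITY and abs(bestobj) > 1e-10:
                    gap = abs(bestbnd - bestobj) / abs(bestobj)
                    if runtime >= TIME_LIMIT and gap <= GAP_LIMIT:
                        model.terminate()

    # m_ML2.optimize(mycallback)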

    Best regards, 
    Jaromił

    0
  • Ye Liu
    Gurobi-versary
    Conversationalist
    First Question

    Hi Jaromił, 

    Thanks for responding! I tried the following, but it still didn't work:

    from gurobipy import *

    def mycallback(model, where):
        if where == GRB.Callback.MIPNODE:
            count = model.cbGet(GRB.Callback.MIPNODE_SOLCNT)
            runtime = model.cbGet(GRB.Callback.RUNTIME)
        if where == GRB.Callback.MIPSOL_OBJBST:
            bestobj = model.cbGet(GRB.Callback.MIPSOL_OBJBST)
        if where == GRB.Callback.MIPSOL_OBJBND:
            bestbnd = model.cbGet(GRB.Callback.MIPSOL_OBJBND)

            gap = gp.abs_(bestbnd - bestobj) / gp.abs_(bestobj)  # compute mipgap as described in the documentation

            if runtime >= 200 and gap <= 0.6:
                model.terminate()  # add some termination criteria based on runtime + mipgap
    [my model...]

    Could you help me write an example? In each run, the model should terminate once the runtime is > 200 s and the MIPGap is less than 0.6%.

    I checked all the documentation related to MIPGap, the MIPNODE_SOLCNT callback attribute, and other related topics, but still didn't find a suitable example that works for me.

    Thank you so much!

    Best,

    Ye

    0

Please sign in to leave a comment.