
Fractional Solution in Continuous Relaxation of Convex Quadratic Programming

Answered

8 comments

  • Jaromił Najman
Gurobi Staff

    Hi Muhamad,

Sometimes I get an integer optimal solution and sometimes it is fractional. I am quite curious: is it possible in my case (convex quadratic) to get a fractional solution instead of an integer one even though I use the simplex method?

    Why should the optimal solution be integer when all variables are allowed to be continuous? Do you have a specific constraint which should make sure that a particular variable attains an integer value? Note that (due to numerical tolerances) there can exist multiple optimal solutions even for convex quadratic models, i.e., there can be an optimal solution with integer values and one with non-integer values and both can have the same objective value (within tolerances). An example would be

    \[\begin{align*}
    \min\,\, &y\\
     \text{s.t.}\,\, &y \geq x^2\\
                   &y \geq 4
    \end{align*}\]

    The solution points \((0,4)\) and \((0.5,4)\) are both feasible and have the same objective value \(4\). The model is convex quadratic.
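Both points can be verified numerically without a solver; a minimal sketch (the tolerance is an illustrative choice):

```python
def feasible(x, y, tol=1e-9):
    # Constraints of the toy model: y >= x^2 and y >= 4
    return y >= x**2 - tol and y >= 4 - tol

# Two distinct feasible points with the same objective value y = 4
for x, y in [(0.0, 4.0), (0.5, 4.0)]:
    assert feasible(x, y)
    print(f"({x}, {y}) is feasible, objective = {y}")
```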

    Best regards, 
    Jaromił

  • Muhamad Fikri

    Hi Jaromil,

    Thank you for the explanation!

Yes, I want to know whether a convex quadratic program can yield integer optimal solutions. In my case, the constraints are linear and the constraint matrix is totally unimodular. When I solved the continuous relaxation of the LP, the optimal solutions were all integral. But when the objective is convex quadratic, the continuous relaxation yields a fractional (non-integer) solution.
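For reference, total unimodularity of this assignment-type constraint matrix can be brute-force checked on a small instance; a sketch, where the sizes m = 3 and n = 2 are illustrative:

```python
import itertools

import numpy as np

# Illustrative small instance: m = 3 agents, n = 2 tasks
m, n = 3, 2
A1 = np.kron(np.eye(m), np.ones((1, n)))  # each agent is assigned exactly once
A2 = np.kron(np.ones((1, m)), np.eye(n))  # each task is covered at least once
A = np.vstack([A1, A2])                   # bipartite incidence structure

# Brute-force check: every square submatrix must have determinant in {-1, 0, 1}
dets = set()
rows, cols = A.shape
for k in range(1, min(rows, cols) + 1):
    for r in itertools.combinations(range(rows), k):
        for c in itertools.combinations(range(cols), k):
            dets.add(round(np.linalg.det(A[np.ix_(r, c)])))
print(dets.issubset({-1, 0, 1}))  # True -> A is totally unimodular
```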

In this case, is there perhaps a Gurobi parameter that I can tweak to achieve integral solutions?

  • Jaromił Najman
Gurobi Staff

    Hi Muhamad,

In this case, is there perhaps a Gurobi parameter that I can tweak to achieve integral solutions?

Unfortunately, there is no such parameter. Are the solution point values far away from integer values? Note that numerical algorithms always work with machine precision and tolerances, so perfectly discrete numbers may not be achievable, but a value of 0.99999999 can usually be treated as 1.
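A common post-processing step is to snap near-integral values within a tolerance; a sketch, where the sample values and the tolerance 1e-6 are illustrative:

```python
import numpy as np

# Hypothetical solver output: values near integers and a genuinely fractional one
x = np.array([0.99999999, 2.0000000003, 0.24012464, 1e-10])

tol = 1e-6  # illustrative integrality tolerance
rounded = np.round(x)
is_integral = np.abs(x - rounded) <= tol
print(is_integral)  # [ True  True False  True]
```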

    Could you please share the model such that we could have a closer look? Note that uploading files in the Community Forum is not possible but we discuss an alternative in Posting to the Community Forum.

    Best regards, 
    Jaromił

  • Muhamad Fikri

    Hi Jaromil,

Sure, here is the model that I developed:

    import numpy as np
    import gurobipy as gp
    from gurobipy import GRB


    # Define QP formulation
    def qp_calculation(values, clstr_size):
        sensing_range = np.array(values)
        cluster_size = clstr_size

        agent_num = sensing_range.shape[0]
        task_num = len(cluster_size)

        # Set parameter for objective function 1
        alpha = 0.3
        q = np.kron(np.eye(task_num),sensing_range.T )

        eigen_q = np.linalg.eigvals(q.T @ q)
        real_eig = np.real(eigen_q)

        # Q below is still indefinite
        Q = q.T @ q
        # Positive Definite via Eigendecomposition
        eigVal, eigVec = np.linalg.eigh(Q)
        qp = eigVec.real @ (np.diag(eigVal)-(np.min(eigVal.real) - alpha) * np.eye(len(eigVal))) @ np.linalg.inv(eigVec.real)
        # Make it symmetric
        Q = 0.5 * ((qp + qp.T))
        p = -2 * (cluster_size.T @ q)

        return Q, p

    # Task allocation using QP
    def qp_allocation(values, cluster_size, Q, p):        
        model = gp.Model("qp_allocation")
        model.Params.LogToConsole = 0
        model.Params.Presolve = 2
        model.Params.LPWarmStart = 2
        model.Params.Method = 0
        model.setParam('FeasibilityTol', 1e-8)  # More strict feasibility tolerance
        model.setParam('OptimalityTol', 1e-8)   # More strict optimality tolerance


        x = model.addMVar((len(values)*len(cluster_size)), vtype=GRB.CONTINUOUS, name="x")

        # Set the linear constraints
        A1 = np.kron(np.eye(len(values)), np.ones((1,len(cluster_size))))
        A2 = np.kron(np.ones((1,len(values))), np.eye(len(cluster_size)))
        b1 = np.ones((1,len(values))).T
        b2 = np.ones((1,len(cluster_size))).T
        model.addConstr(A1 @ x == b1, name="c1")
        model.addConstr(A2 @ x >= b2, name="c2")


        # Define the objective function
        obj1 = 0.5 * x.T @ Q @ x + p.T @ x
        model.setObjective(obj1, sense=gp.GRB.MINIMIZE)
        model.write("myQP.mps")
        model.optimize()
        x_values = x.X

        return x_values


    ## -----------------------------------------------------------------------------------------
    # MAIN FUNCTION


    def main():
        # Parameters
        np.random.seed(42)
        sensing_range = np.random.randint(35, 45, size=6)
        # Initialize the cluster size
        # cluster_size = np.array([36., 44., 50.50, 64.5, 44.50, 37.  ])
        # cluster_size = np.array([51., 42.50, 62., 76.50, 52.50  ])
        # cluster_size = np.array([62., 97.50, 95.50, 59.])
        # cluster_size = np.array([117.5, 87.50, 119.])
        cluster_size = np.array([187., 165.5])


        Q, p  = qp_calculation(sensing_range, cluster_size)
        x_values = qp_allocation(sensing_range, cluster_size, Q, p)
        print("QP allocation: ", x_values)
        # print("Q:", Q)

        # Print the assignment matrix
        qp_assignment = np.zeros((len(sensing_range), len(cluster_size)))
        for i in range(len(sensing_range)):
            for j in range(len(cluster_size)):
                qp_assignment[i, j] = x_values[i * len(cluster_size) + j]
                   
        # print("Assignment matrix \n ", qp_assignment)


        qpAgentDict = {}
        # Find the column indices with maximum values in each row
        max_indices = np.argmax(qp_assignment, axis=1)
        # Iterate through rows and add to the result_dict
        for row_idx, col_idx in enumerate(max_indices):
            if col_idx in qpAgentDict:
                qpAgentDict[col_idx].append(row_idx)
            else:
                qpAgentDict[col_idx] = [row_idx]
        # Sort the assignment
        print("QP Allocation")
        print(sorted(qpAgentDict.items()))


    if __name__ == "__main__":
        main()

The solution point values when len(cluster_size) == len(sensing_range) == 6:

    [0.  1.  0.  0.  0.  0.  0.5 0.  0.  0.  0.5 0.  0.  0.  1.  0.  0.  0.
     0.  0.  0.  0.  0.  1.  0.5 0.  0.  0.  0.5 0.  0.  0.  0.  1.  0.  0. ]

    QP Allocation
    [(0, [1]), (1, [0]), (2, [2]), (3, [5]), (4, [4]), (5, [3])]

and when len(cluster_size) == 5, len(sensing_range) == 6:

    [0.75987536 0.         0.24012464 0.         0.         0.24012464
     0.75987536 0.         0.         0.         0.         0.
     0.50721828 0.         0.49278172 0.         0.         0.25265708
     0.74734292 0.         0.         0.         0.         1.
     0.         0.         0.24012464 0.         0.         0.75987536]

    QP Allocation
    [(0, [0]), (1, [1]), (2, [2]), (3, [3, 4]), (4, [5])]

The values are quite far from integral, I guess.

    Thank you

    Best regards

  • Jaromił Najman
Gurobi Staff

    Which Gurobi version are you using? With v11.0.0 when I run your code I get

    Set parameter Presolve to value 2
    Set parameter LPWarmStart to value 2
    Set parameter Method to value 0
    Set parameter FeasibilityTol to value 1e-08
    Set parameter OptimalityTol to value 1e-08
    Gurobi Optimizer version 11.0.0 build v11.0.0rc2 (mac64[x86] - Darwin 23.4.0 23E224)

    CPU model: Intel(R) Core(TM) i5-1038NG7 CPU @ 2.00GHz
    Thread count: 4 physical cores, 8 logical processors, using up to 8 threads

    Optimize a model with 40 rows, 12 columns and 96 nonzeros
    Model fingerprint: 0x96fc9454
    Model has 42 quadratic objective terms
    Coefficient statistics:
      Matrix range     [1e+00, 1e+00]
      Objective range  [1e+04, 2e+04]
      QObjective range [1e+03, 4e+03]
      Bounds range     [0e+00, 0e+00]
      RHS range        [1e+00, 1e+00]
    Presolve time: 0.00s
    Presolved: 8 rows, 12 columns, 24 nonzeros
    Presolved model has 42 quadratic objective terms

    Iteration    Objective       Primal Inf.    Dual Inf.      Time
           0    0.0000000e+00   0.000000e+00   0.000000e+00      0s
           0   -7.1288600e+04   0.000000e+00   4.384277e-01      0s
           7   -7.3405100e+04   0.000000e+00   0.000000e+00      0s

    Solved in 7 iterations and 0.00 seconds (0.00 work units)
    Optimal objective -7.340510000e+04
    x_values: [1. 0. 1. 0. 0. 1. 1. 0. 1. 0. 0. 1.]
    QP allocation:  [1. 0. 1. 0. 0. 1. 1. 0. 1. 0. 0. 1.]
    QP Allocation
    [(0, [0, 1, 3, 4]), (1, [2, 5])]

    Please note that I changed

    model = gp.Model("qp_allocation")
    #model.Params.LogToConsole = 0
    model.Params.Presolve = 2
    ...
    # Define the objective function
    obj1 = 0.5 * x.T @ Q @ x + p.T @ x
    model.setObjective(obj1, sense=gp.GRB.MINIMIZE)
    model.write("myQP.mps")
    model.optimize()
    model.write("myLP.lp")
    x_values = x.X

    print(f"x_values: {x_values}")

    return x_values

to get the output. So, interestingly, on my side the solution is perfectly integral.

    If you are using an older version of Gurobi, then it is possible that some bug has been fixed in the meantime and it is no longer present with v11.0.0. Please also note that I get the same result using v11.0.1.

    Best regards, 
    Jaromił

  • Muhamad Fikri

    Hi Jaromil,

    When I used one of these three values

        # cluster_size = np.array([62., 97.50, 95.50, 59.])
        # cluster_size = np.array([117.5, 87.50, 119.])
        cluster_size = np.array([187., 165.5])

the output was perfectly integral. But if I use one of the two cluster sizes below

        # cluster_size = np.array([36., 44., 50.50, 64.5, 44.50, 37.  ])
        # cluster_size = np.array([51., 42.50, 62., 76.50, 52.50  ])

the output is fractional, as I reported to you before. Could you give it another try? My Gurobi version is 11.0.0. Maybe the result is still integral on your side.

    Thank you

    Best regards

    Muhamad

  • Jaromił Najman
Gurobi Staff

    Hi Muhamad,

As you already mentioned, the \(A\) matrix is totally unimodular. This is enough for LPs to guarantee integral solution point values. However, for quadratic objectives, the optimal solution point does not have to lie at a vertex of the feasible set. The Stack Exchange post Total Unimodularity in Integer QP discusses this issue, and it also references a paper about total unimodularity in QPs.
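The effect can already be seen in one dimension; a minimal sketch over the simplest TU feasible set, the box \(0 \leq x \leq 1\):

```python
# The vertices of the TU polytope 0 <= x <= 1 are the integers {0, 1}, so a
# *linear* objective always has an integral optimum there. A convex quadratic
# objective, however, can attain its minimum strictly inside the polytope.
obj = lambda x: (x - 0.5) ** 2

vertex_best = min(obj(0.0), obj(1.0))  # best over the integral vertices: 0.25
interior = obj(0.5)                    # at the fractional point: 0.0
print(vertex_best, interior)           # 0.25 0.0
```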

    Note that for the cluster size

    cluster_size = np.array([36., 44., 50.50, 64.5, 44.50, 37.  ])

    you will get an optimal objective value of

    Optimal objective -1.778275000e+04

    when all variables are continuous and an optimal objective value of

    Best objective -1.778260000000e+04, best bound -1.778260000000e+04, gap 0.0000%

    when all variables are set to be binary.

So it is indeed the case that the optimal solution point does not lie at a vertex of the feasible set for your convex quadratic objective function.

    Best regards, 
    Jaromił

  • Muhamad Fikri

    Hi Jaromil,

    Thank you for highlighting the result and reference from stack exchange!

    Best regards

    Muhamad

