Iterative approach changes the optimal result
Hi,
I hope my post is in the correct community section. This post is a continuation of my previous question [Here], where I was wondering about a non-integer optimal solution that I expected to be integer.
I then came up with the idea of an iterative approach, hoping the non-integer solution would end up close to 1 or 0. Here is the code that I have:
import numpy as np
import gurobipy as gp
from gurobipy import GRB


# Define QP formulation
def qp_calculation(values, clstr_size):
    sensing_range = np.array(values)
    cluster_size = clstr_size
    agent_num = sensing_range.shape[0]
    task_num = len(cluster_size)

    # Set parameter for objective function 1
    alpha = 0.3
    q = np.kron(np.eye(task_num), sensing_range.T)
    eigen_q = np.linalg.eigvals(q.T @ q)
    real_eig = np.real(eigen_q)

    # Q below is still indefinite
    Q = (q.T @ q)

    # Positive definite via eigendecomposition
    eigVal, eigVec = np.linalg.eigh(Q)
    qp = eigVec.real @ (np.diag(eigVal) - (np.min(eigVal.real) - alpha) * np.eye(len(eigVal))) @ np.linalg.inv(eigVec.real)

    # Make it symmetric
    Q = 0.5 * (qp + qp.T)
    p = -2 * (cluster_size.T @ q)
    return Q, p


# Task allocation using QP
def qp_allocation(values, cluster_size, Q, p):
    model = gp.Model("qp_allocation")
    model.Params.LogToConsole = 0
    model.Params.Presolve = 2
    model.Params.LPWarmStart = 2
    model.Params.Method = 0
    model.setParam('FeasibilityTol', 1e-8)  # More strict feasibility tolerance
    model.setParam('OptimalityTol', 1e-8)   # More strict optimality tolerance

    x = model.addMVar((len(values) * len(cluster_size)), vtype=GRB.CONTINUOUS, name="x")

    # Set the linear constraints
    A1 = np.kron(np.eye(len(values)), np.ones((1, len(cluster_size))))
    A2 = np.kron(np.ones((1, len(values))), np.eye(len(cluster_size)))
    b1 = np.ones((1, len(values))).T
    b2 = np.ones((1, len(cluster_size))).T
    model.addConstr(A1 @ x == b1, name="c1")
    model.addConstr(A2 @ x >= b2, name="c2")

    # Define the objective function
    obj1 = 0.5 * x.T @ Q @ x + p.T @ x
    model.setObjective(obj1, sense=gp.GRB.MINIMIZE)
    model.write("myQP.mps")
    model.optimize()

    x_values = x.X
    return x_values


def iterative(sensing_range, cluster_size, Q, p):
    print("Print the first optimization")
    model = gp.Model("qp_allocation")
    model.Params.LogToConsole = 0

    agent_num = len(sensing_range)
    task_num = len(cluster_size)
    x = model.addMVar((agent_num * task_num), vtype=GRB.CONTINUOUS, name="x")
    model.update()

    # Constraints setup
    A1 = np.kron(np.eye(agent_num), np.ones((1, task_num)))
    A2 = np.kron(np.ones((1, agent_num)), np.eye(task_num))
    b1 = np.ones(agent_num)
    b2 = np.ones(task_num)
    model.addConstr(A1 @ x == b1, name="c1")
    model.addConstr(A2 @ x >= b2, name="c2")

    # Set or update the objective in the model
    obj = 0.5 * x @ Q @ x + p @ x
    model.setObjective(obj, GRB.MINIMIZE)
    model.update()
    model.optimize()

    x_prev = x.X
    print(f"Iterative QP Solution: {x_prev}")

    tolerance = 0.05  # Tolerance for determining 'closeness' to integers
    integer_status = np.allclose(x_prev, np.round(x_prev), atol=tolerance)
    print("Initial variables are close to integer values:", integer_status)

    # Iterative update
    num_iter = 10
    if not integer_status:
        for k in range(num_iter):
            x_new = x_prev
            # Update the objective function
            model.setObjective(0.5 * x_new @ Q @ x_new + p @ x_new, GRB.MINIMIZE)
            model.optimize()
            x_update = x.X
            x_prev = x_update
            print(f"Iteration {k} QP Solution:", x_update)
            # Check if the new solution is close to integers
            if np.allclose(x_prev, np.round(x_prev), atol=tolerance):
                print("Variables are close to integer values after iteration", k)
                break
            k += 1
    else:
        print("Keep the previous optimal solution")
    return x_prev


## -----------------------------------------------------------------------------------------
# MAIN FUNCTION
def main():
    # Parameters
    np.random.seed(42)
    sensing_range = np.random.randint(35, 45, size=6)

    # Initialize the cluster size
    # cluster_size = np.array([36., 44., 50.50, 64.5, 44.50, 37.])
    cluster_size = np.array([51., 42.50, 62., 76.50, 52.50])
    # cluster_size = np.array([62., 97.50, 95.50, 59.])
    # cluster_size = np.array([117.5, 87.50, 119.])
    # cluster_size = np.array([187., 165.5])

    Q, p = qp_calculation(sensing_range, cluster_size)
    x_values = qp_allocation(sensing_range, cluster_size, Q, p)
    print("QP allocation: ", x_values)

    # Run iterative algorithm
    opt = iterative(sensing_range, cluster_size, Q, p)
    print("QP allocation: ", x_values)
    print("Opt sol: ", (opt))

    # Print the assignment matrix
    qp_assignment = np.zeros((len(sensing_range), len(cluster_size)))
    for i in range(len(sensing_range)):
        for j in range(len(cluster_size)):
            qp_assignment[i, j] = x_values[i * len(cluster_size) + j]

    qpAgentDict = {}
    # Find the column indices with maximum values in each row
    max_indices = np.argmax(qp_assignment, axis=1)
    for row_idx, col_idx in enumerate(max_indices):
        if col_idx in qpAgentDict:
            qpAgentDict[col_idx].append(row_idx)
        else:
            qpAgentDict[col_idx] = [row_idx]

    # Sort the assignment
    print("QP Allocation")
    print(sorted(qpAgentDict.items()))


if __name__ == "__main__":
    main()
The result without the iteration is
QP allocation: [0.75987536 0. 0.24012464 0. 0. 0.24012464
0.75987536 0. 0. 0. 0. 0.
0.50721828 0. 0.49278172 0. 0. 0.25265708
0.74734292 0. 0. 0. 0. 1.
0. 0. 0.24012464 0. 0. 0.75987536]
which is the same as the result I obtained in the link above.
When I run it with the iterative approach, I get the integer solution I was hoping for. The result is
Opt sol: [0. 0. 0. 0. 1. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 1. 0. 0. 0. 0. 1. 0. 0. 0.
0. 0. 0. 1. 0. 0.]
But I am wondering why the optimal solution changes direction compared to the first one. My questions are:
- Are the steps I used correct for an iterative method?
- Is it possible for the optimal solution to change from one result to a different one?
Thank you
Muhamad
Hi Muhamad,
As explained by my colleague, Jaromił, in your previous post, the guarantee of an integral optimal solution provided by totally unimodular constraint matrices applies strictly to linear programming. It does not carry over to quadratic programming problems.
The optimal solution to the model implemented in function \(\texttt{qp_allocation}\) does not lie at a vertex of the feasible region and therefore is not integral.
If you need an integral solution, you must formulate your problem as an integer quadratic programming problem by forcing the decision variables to take integral values.
In your iterative approach, you initially solve the same model as implemented in function \(\texttt{qp_allocation}\). You then update the objective function to a constant value in the next iteration. In this iteration, the updated model is a linear programming model with a totally unimodular constraint matrix and a constant objective. This is guaranteed to give you an integral optimal solution. The optimal solution to this LP is just a feasible solution to the original QP.
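To illustrate the last two points, here is a minimal sketch on a hypothetical small instance (the dimensions and model name are made up for illustration; your data is not used):
import numpy as np
import gurobipy as gp
from gurobipy import GRB

agents, tasks = 3, 2
A1 = np.kron(np.eye(agents), np.ones((1, tasks)))   # each agent is assigned exactly one task
A2 = np.kron(np.ones((1, agents)), np.eye(tasks))   # each task receives at least one agent

m = gp.Model("constant_objective_lp")
m.Params.LogToConsole = 0
m.Params.Method = 0                                  # simplex, so a basic (vertex) solution is returned
x = m.addMVar(agents * tasks, lb=0.0, name="x")
m.addConstr(A1 @ x == np.ones(agents), name="c1")
m.addConstr(A2 @ x >= np.ones(tasks), name="c2")
m.setObjective(0.0, GRB.MINIMIZE)                    # constant objective: effectively a feasibility LP
m.optimize()
print(x.X)   # a vertex of this totally unimodular system, so it should be 0/1-valued
Such a point satisfies all constraints of your QP, but nothing forces it to minimize the quadratic objective, which is why it can differ from the solution returned by \(\texttt{qp_allocation}\).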
Best regards,
Maliheh
Hi Maliheh,
Thanks for your answer.
I have some questions, especially regarding a few of your comments. Let me highlight the first one: "If you need an integral solution, you must formulate your problem as an integer quadratic programming problem by forcing the decision variables to take integral values."
Does that mean I have to set x to be GRB.INTEGER?
"In this iteration, the updated model is a linear programming model with a totally unimodular constraint matrix and a constant objective."
Even though I define the updated model as the same QP, it becomes a linear program? That is interesting, because I thought it would still be a QP.
"This is guaranteed to give you an integral optimal solution. The optimal solution to this LP is just a feasible solution to the original QP."
What do you mean by just a feasible solution? If the solution lies at a vertex, doesn't that make it optimal for the QP as well?
This part I still don't understand; I hope to get some insight from you. Thank you
Best regards
Muhamad
Does that mean I have to set x to be GRB.INTEGER?
Yes, you need to set the decision variables to either binary or integer, depending on your application.
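For example, a minimal sketch of how your \(\texttt{qp_allocation}\) model could be declared with binary variables (the function name \(\texttt{iqp_allocation}\) and the choice of binary rather than general integer variables are assumptions for illustration):
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def iqp_allocation(values, cluster_size, Q, p):
    # Same data and constraints as qp_allocation, but with binary decision
    # variables, which turns the continuous QP into an integer QP (MIQP)
    model = gp.Model("iqp_allocation")
    n = len(values) * len(cluster_size)
    x = model.addMVar(n, vtype=GRB.BINARY, name="x")  # GRB.BINARY instead of GRB.CONTINUOUS

    A1 = np.kron(np.eye(len(values)), np.ones((1, len(cluster_size))))
    A2 = np.kron(np.ones((1, len(values))), np.eye(len(cluster_size)))
    model.addConstr(A1 @ x == np.ones(len(values)), name="c1")
    model.addConstr(A2 @ x >= np.ones(len(cluster_size)), name="c2")

    model.setObjective(0.5 * x @ Q @ x + p @ x, GRB.MINIMIZE)
    model.optimize()
    return x.X
Gurobi then solves this with branch-and-bound, so the returned solution is integral by construction.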
Even though I define the updated model as the same QP, it becomes a linear program? That is interesting, because I thought it would still be a QP.
You are setting the objective function in your iterative loop (see below) where \(\texttt{x_new}\) is a numpy ndarray, not a Gurobi MVar object. This is why the expression evaluates to a constant rather than a Gurobi quadratic expression. You can query the objective function expression by calling the method model.getObjective() after optimizing it to verify this.
model.setObjective(0.5 * x_new @ Q @ x_new + p @ x_new, GRB.MINIMIZE)
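To make the difference visible, here is a minimal, self-contained sketch (the tiny model and the \(\texttt{Qs}\), \(\texttt{ps}\), and bound values are made up purely for illustration):
import numpy as np
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("objective_check")
m.Params.LogToConsole = 0
Qs = np.eye(2)                   # illustrative 2x2 quadratic term
ps = np.array([1.0, 1.0])        # illustrative linear term
xs = m.addMVar(2, lb=0.0, ub=1.0, name="x")

# Objective built from the MVar: a genuine quadratic expression in x
m.setObjective(0.5 * xs @ Qs @ xs + ps @ xs, GRB.MINIMIZE)
m.optimize()
print(m.getObjective())          # shows quadratic terms in x[0], x[1]

# Objective built from the solution values (a numpy ndarray): it collapses
# to a single number before Gurobi ever sees it, so the model ends up with
# a constant objective
x_vals = xs.X
m.setObjective(0.5 * x_vals @ Qs @ x_vals + ps @ x_vals, GRB.MINIMIZE)
m.optimize()
print(m.getObjective())          # shows only a constant
The same happens in your loop, because \(\texttt{x_new = x_prev}\) is a numpy array of solution values rather than the MVar \(\texttt{x}\).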
What do you mean by just a feasible solution? If the solution lies at a vertex, doesn't that make it optimal for the QP as well?
The new model has a constant objective function and a set of linear constraints, making it a feasibility LP, since there is no real objective to optimize. Its solution is feasible for the QP because the two models share the same constraints.
As mentioned several times, the optimal solution to a QP with a totally unimodular constraint matrix does not necessarily lie at a vertex of the feasible region. I am not sure why you still expect a feasible vertex solution to be optimal.
Best regards,
Maliheh