Solving Least Squares yields wrong result
I tried to solve a linear least squares problem in Gurobi as follows:
import gurobipy as gp
from gurobipy import GRB

# A, b are numpy arrays; b has shape (n1,)
n1, n2 = A.shape

m = gp.Model()
x = m.addMVar(shape=n2, lb=float('-inf'))

# expand ||Ax - b||^2 into x' Q x + c' x + b' b
Q = A.T @ A
c = -2 * b.T @ A
obj = x @ Q @ x + c @ x + b.T @ b

m.setObjective(obj, GRB.MINIMIZE)
m.optimize()
While this works for small examples, it fails for specific larger arrays (n1 = 71002, n2 = 502) with the error
Numerical trouble encountered
Model may be infeasible or unbounded. Consider using the
homogeneous algorithm (through parameter 'BarHomogeneous')
Solving with
m.Params.BarHomogeneous = 1
yields the error "Unbounded model". But by definition of the least squares problem, the objective function obviously cannot be negative.
Any idea what is happening here? Can floating-point errors in the computation of Q and c modify the formulation such that it becomes unbounded? Thanks for your help.
Edit: Here are the arrays A and b: https://www.dropbox.com/sh/ngzfdh2pofelizu/AAAXOS1rHyXMV27Hc2_fEQRNa?dl=0
Hi Yannic,
Least squares problems can achieve negative values and even be unbounded. This can be found in, e.g., Boyd's Convex Optimization book (Example 3.9 on page 81).
In your case, the unboundedness is very likely caused by the numerics of your model (assuming that the problem defined by your \(\texttt{A}\) matrix is indeed bounded). The quadratic ranges are pretty large:

Objective range  [3e+01, 3e+04]
QObjective range [8e-01, 3e+05]

and the coefficients have a lot of seemingly significant digits, given their rather large order of magnitude.
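As a quick sanity check (plain NumPy, not Gurobi; just a sketch), you could also look at the smallest eigenvalue of \(\texttt{Q}\): in exact arithmetic \(A^T A\) is positive semidefinite, but roundoff in the product can push an eigenvalue slightly below zero, which makes the expanded quadratic objective numerically delicate.

import numpy as np

# A is the numpy array from the model above
Q = A.T @ A

# eigvalsh returns the eigenvalues of the symmetric matrix Q in ascending order;
# a slightly negative smallest eigenvalue means roundoff has already made the
# expanded objective x' Q x + c' x + b' b numerically non-convex
print("smallest eigenvalue of Q:", np.linalg.eigvalsh(Q).min())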
You could try reformulating your model as
\[
\begin{align}
\min_{z_i, x_i} \quad & \sum_i w_i \cdot z_i^2 \\
\text{s.t. } & z_i = a_i^T x - b_i \quad \forall i
\end{align}
\]
which might help in tackling the numerical issues.
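In gurobipy's matrix API, this reformulation could look roughly as follows (a minimal sketch with all weights \(w_i = 1\); the variable names \(\texttt{x}\) and \(\texttt{z}\) are only illustrative):

import gurobipy as gp
from gurobipy import GRB

# A, b are the same numpy arrays as in the original model
n1, n2 = A.shape

m = gp.Model()
x = m.addMVar(shape=n2, lb=float('-inf'), name="x")
z = m.addMVar(shape=n1, lb=float('-inf'), name="z")  # residuals z_i = a_i^T x - b_i

# keep the raw coefficients of A in linear constraints instead of forming A^T A
m.addConstr(z == A @ x - b)

# minimize the sum of squared residuals
m.setObjective(z @ z, GRB.MINIMIZE)
m.optimize()

This keeps the coefficient ranges of the original data instead of squaring them through \(A^T A\), which is usually friendlier numerically.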
Best regards,
Jaromił