# L0 regression example - MIQP question

I'm new to Gurobi and MIQP optimization in general. I was testing the L0 regression example posted here: https://github.com/Gurobi/modeling-examples/tree/master/linear_regression.

In the post and code, they minimize the expanded form of the sum of squared errors:

\[\beta^T X^T X \beta - 2 y^T X \beta + y^T y\]

Is there some reason why the error shouldn't be computed, squared, and minimized directly?

\[e_0 = (y-X\beta)^T (y-X\beta)\]
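
For context (a quick algebra check of my own, not from the original post), expanding \(e_0\) recovers exactly the objective used in the example:

\[e_0 = (y-X\beta)^T (y-X\beta) = y^T y - 2y^T X\beta + \beta^T X^T X\beta\]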

I was able to achieve a similar (if not identical) result with the second method.
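
For reference, here is a minimal gurobipy sketch of what I mean by the direct formulation. This is not the repo's code; the data is synthetic and the variable names are my own, and it assumes Gurobi's matrix-friendly API (gurobipy 9.1 or later):

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

# Synthetic data (hypothetical, just to make the sketch self-contained)
rng = np.random.default_rng(0)
n, p = 50, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.1 * rng.normal(size=n)

m = gp.Model("direct_sse")
beta = m.addMVar(p, lb=-GRB.INFINITY, name="beta")

# Residual vector r = y - X @ beta, introduced as auxiliary variables
r = m.addMVar(n, lb=-GRB.INFINITY, name="r")
m.addConstr(r == y - X @ beta)

# Minimize e_0 = (y - X beta)^T (y - X beta) directly as the inner product r @ r
m.setObjective(r @ r, GRB.MINIMIZE)
m.optimize()

print(beta.X)  # estimated coefficients
```

Since the two objectives are algebraically identical, I would expect this to match the expanded form up to solver tolerances, which is consistent with what I observed.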
