L0 regression example - MIQP question
I'm new to Gurobi and MIQP optimization in general. I was testing the L0 regression example posted here: https://github.com/Gurobi/modeling-examples/tree/master/linear_regression.
In the post and code, they minimize the expanded form of the sum of squared errors:
\[\beta^T X^T X \beta - 2 y^T X \beta + y^T y\]
Is there some reason why the error shouldn't be computed, squared, and minimized directly?
\[e_0 = (y-X\beta)^T (y-X\beta)\]
I was able to achieve a similar (if not identical) result with the second method.
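To make the comparison concrete, here is a minimal gurobipy sketch of the two objectives on synthetic data. The toy data, variable names, and auxiliary residual variables are my own for illustration, and I've left out the L0 cardinality constraint from the example:

```python
import numpy as np
import gurobipy as gp
from gurobipy import GRB

# Synthetic data, purely illustrative (not from the Gurobi example).
rng = np.random.default_rng(0)
n, p = 30, 4
X = rng.normal(size=(n, p))
y = X @ np.array([2.0, 0.0, -1.0, 0.0]) + 0.05 * rng.normal(size=n)

m = gp.Model("least_squares")
beta = m.addMVar(p, lb=-GRB.INFINITY, name="beta")

# Direct form: introduce residual variables e = y - X beta and
# minimize e^T e.
e = m.addMVar(n, lb=-GRB.INFINITY, name="e")
m.addConstr(e == y - X @ beta)
m.setObjective(e @ e, GRB.MINIMIZE)

# Expanded form from the example (equivalent objective; the y^T y
# term is a constant and doesn't affect the optimal beta):
# m.setObjective(beta @ (X.T @ X) @ beta - 2 * (y @ X) @ beta + y @ y,
#                GRB.MINIMIZE)

m.optimize()
print(beta.X)
```

Both should report the same optimal \(\beta\); only the constant offset \(y^T y\) differs between the two objective values if it is dropped from the expanded form.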