Quadratic programming (polynomial objective and linear constraints) worst-case time complexity
Answered

I submitted a request a week ago but did not get any response, possibly because I submitted it to the wrong community, so I am resubmitting it here.
I have a question about the quadratic programming problem (polynomial objective and linear constraints).
I solve it with Gurobi. I wonder what the worst-case time complexity of the parallel barrier algorithm is. I could not find any reference on Google Scholar for this method. I know that another widely used interior-point method, the primal-dual interior-point method, has a time complexity of \(O(\sqrt{n} \ln(1/\epsilon))\),
where \(n\) is the number of inequality constraints. Does the parallel barrier algorithm have the same time complexity? If so, could you please recommend a reference for the parallel barrier algorithm?
If not, could you please give the worst-case time complexity of the algorithm and recommend some references?
Thank you so much for your time.
-
Gurobi's barrier algorithms all follow the general predictor-corrector scheme, and any complexity bound on the number of Newton steps from a \(\delta\)-neighborhood of the central path to an \(\epsilon\)-neighborhood of the analytic center of the optimal face applies. For example, in the LP case the classical bounds in [1, Chapters 3, 5, and 6] apply. For QP and other conic problems, all worst-case bounds I know off the top of my head instead target short-step methods, yielding variations of the complexity bound you mention above, e.g., [2, Chapter 6]. So strictly speaking these do not apply to our framework. The parallelism of our methods has no influence on these theoretical Newton-step complexity bounds.
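For context (this is just the standard textbook statement, not anything specific to Gurobi's implementation): the short-step bounds referenced above are usually phrased in terms of the self-concordant barrier parameter \(\nu\), which for the standard logarithmic barrier over \(n\) inequality constraints equals \(n\). Up to constants and the dependence on the starting point, the Newton-step count to reach an \(\epsilon\)-accurate solution is

\[
O\!\left(\sqrt{\nu}\,\ln\frac{1}{\epsilon}\right),
\]

which in the LP/QP case reduces to the \(O(\sqrt{n}\,\ln(1/\epsilon))\) bound you quoted.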
The second aspect of complexity in this context is the per-step computational complexity bound. Without further assumptions on the data, no better bound than \(O(n^3 + mn^2)\) can be given. Parallelism directly affects this bound: under the (unrealistic) assumption of perfect parallel speedup, the complexity could be phrased as \(O((n^3 + mn^2)/p)\) for \(p\) independent processors. In practice, however, the achievable parallelism is smaller and behaves nonlinearly in \(p\) (Amdahl's law).
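To make the Amdahl's-law point concrete, here is a minimal sketch (plain Python, not Gurobi code); the serial fraction s is a hypothetical parameter chosen purely for illustration:

```python
# Minimal sketch: Amdahl's law, illustrating why the idealized
# O((n^3 + m*n^2)/p) per-iteration bound is not reached in practice.
# 's' is an assumed serial (non-parallelizable) fraction of one barrier
# iteration -- a hypothetical value for illustration only.

def amdahl_speedup(p: int, s: float) -> float:
    """Ideal speedup on p processors when a fraction s of the work is serial."""
    return 1.0 / (s + (1.0 - s) / p)

if __name__ == "__main__":
    for p in (1, 2, 4, 8, 16, 32):
        # Even with only 10% serial work, speedup saturates well below p.
        print(f"p={p:3d}  speedup={amdahl_speedup(p, s=0.10):5.2f}")
```

Even a modest serial fraction caps the achievable speedup well below \(p\), so the \(O((n^3 + mn^2)/p)\) figure should be read as an idealized best case for the per-iteration cost.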
Overall it is very important to understand that such theoretical complexity bounds may not reflect the actual behavior of an *implementation* of an algorithm: Newton systems may not be solvable to sufficient accuracy to follow the analysis, constraints may be effectively linearly dependent, the interior may be effectively empty, etc. And the worst-case per-iteration complexity is often far from reality, because it cannot take the sparsity of \(A\) into account. So while the art of designing complexity bounds is a sophisticated and interesting research topic, a direct transfer to real-world behavior is most likely misleading.
[1] S. J. Wright, Primal-Dual Interior-Point Methods. Philadelphia, PA: SIAM, 1997.
[2] A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. Philadelphia, PA: SIAM, 2001.