Escaping Saddle Points in Constrained Optimization

In this paper, we focus on escaping from saddle points in smooth nonconvex optimization problems subject to a convex set $\mathcal{C}$. We propose a generic framework that yields convergence to a second-order stationary point of the problem, if the convex set $\mathcal{C}$ is simple for a quadratic objective function. To be more precise, our results hold if one can find a $\rho$-approximate solution of a quadratic program subject to $\mathcal{C}$ in polynomial time, where $\rho < 1$ is a positive constant that depends on the structure of the set $\mathcal{C}$. Under this condition, we show that the sequence of iterates generated by the proposed framework reaches an $(\epsilon,\gamma)$-second order stationary point (SOSP) in at most $\mathcal{O}(\max\{\epsilon^{-2}, \rho^{-3}\gamma^{-3}\})$ iterations. We further characterize the overall arithmetic operations needed to reach an SOSP when the convex set $\mathcal{C}$ can be written as a set of quadratic constraints. Finally, we extend our results to the stochastic setting and characterize the number of stochastic gradient and Hessian evaluations required to reach an $(\epsilon,\gamma)$-SOSP.
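To make the two-phase idea behind the framework concrete, below is a minimal Python sketch under illustrative assumptions: $\mathcal{C}$ is taken to be the Euclidean unit ball, and a crude random-search routine stands in for the $\rho$-approximate quadratic-program oracle the abstract assumes. All names (`project_ball`, `qp_model_oracle`, `generic_framework`) and parameter choices are hypothetical and ours, not the paper's; this is not the authors' algorithm, only an illustration of alternating projected-gradient steps with a QP "escape" step.

```python
import numpy as np

def project_ball(x, r=1.0):
    """Euclidean projection onto C = {x : ||x|| <= r} (our assumed simple set)."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def qp_model_oracle(g, H, x, r=1.0, n_samples=2000, rng=None):
    """Approximately minimize the quadratic model
       q(u) = g.(u - x) + 0.5 (u - x)^T H (u - x)   over u in C.
    Random search is a stand-in for the rho-approximate QP solver the paper assumes."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_u, best_val = x, 0.0
    for _ in range(n_samples):
        u = project_ball(x + rng.standard_normal(x.shape), r)
        d = u - x
        val = g @ d + 0.5 * d @ H @ d
        if val < best_val:
            best_u, best_val = u, val
    return best_u, best_val

def generic_framework(grad, hess, x0, eps=1e-3, gamma=1e-2, eta=0.1, max_iter=1000):
    """Sketch: take projected-gradient steps while they make progress; once the
    gradient mapping is small, query the QP oracle for a model-decreasing
    (e.g., negative-curvature) direction over C, and stop when none exists."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        x_plus = project_ball(x - eta * g)
        if np.linalg.norm(x_plus - x) > eps:   # first-order progress still available
            x = x_plus
            continue
        u, val = qp_model_oracle(g, hess(x), x)
        if val >= -gamma:                      # no sufficient model decrease over C
            return x                           # candidate (eps, gamma)-SOSP
        x = u                                  # escape step along the QP solution

    return x

# Example: f(x) = 0.5 * x^T diag(1, -1) x has a saddle at the origin; over the
# unit ball its minimizers are (0, 1) and (0, -1).
A = np.diag([1.0, -1.0])
x_star = generic_framework(lambda x: A @ x, lambda x: A, np.zeros(2))
print(x_star)  # lands near (0, 1) or (0, -1) rather than staying at the saddle
```

In this toy run the projected-gradient step is a fixed point at the saddle, so the QP oracle is what supplies the escape direction; replacing the random-search oracle with a genuine $\rho$-approximate QP solver for the given $\mathcal{C}$ is exactly the condition the abstract places on the framework.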