Lower Bounds for Non-Convex Stochastic Optimization

Abstract

We lower bound the complexity of finding $\epsilon$-stationary points (points with gradient norm at most $\epsilon$) using stochastic first-order methods. In a well-studied model where algorithms access smooth, potentially non-convex functions through queries to an unbiased stochastic gradient oracle with bounded variance, we prove that (in the worst case) any algorithm requires at least $\epsilon^{-4}$ queries to find an $\epsilon$-stationary point. The lower bound is tight, and establishes that stochastic gradient descent is minimax optimal in this model. In a more restrictive model where the noisy gradient estimates satisfy a mean-squared smoothness property, we prove a lower bound of $\epsilon^{-3}$ queries, establishing the optimality of recently proposed variance reduction techniques.
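As a concrete illustration of the oracle model (not taken from the paper), the sketch below runs plain SGD on a smooth non-convex test function, querying an unbiased stochastic gradient oracle with variance $\sigma^2$ and tracking the smallest true gradient norm reached. The test function, Gaussian noise model, and $1/\sqrt{t}$ step size are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

# Minimal sketch (illustrative, not from the paper) of the oracle model the
# abstract describes: SGD on a smooth non-convex objective, where the
# algorithm only sees unbiased stochastic gradients with variance <= sigma^2.

def grad_f(x):
    # True gradient of f(x) = sum_i (x_i^2 + cos(3 x_i)); used only to
    # measure epsilon-stationarity, never revealed to the algorithm.
    return 2.0 * x - 3.0 * np.sin(3.0 * x)

def oracle(x, sigma, rng):
    # Unbiased oracle: E[g] = grad_f(x), E||g - grad_f(x)||^2 = sigma^2.
    return grad_f(x) + (sigma / np.sqrt(x.size)) * rng.standard_normal(x.size)

def sgd(x0, sigma, eta0, num_queries, rng):
    # Plain SGD with a 1/sqrt(t) step size; returns the smallest true
    # gradient norm among the iterates, the quantity the epsilon^{-4}
    # worst-case rate is stated for.
    x, best = x0.copy(), np.inf
    for t in range(1, num_queries + 1):
        best = min(best, np.linalg.norm(grad_f(x)))
        x -= (eta0 / np.sqrt(t)) * oracle(x, sigma, rng)
    return best

rng = np.random.default_rng(0)
x0 = rng.standard_normal(10)
for n in [100, 1_000, 10_000, 100_000]:
    eps = sgd(x0, sigma=1.0, eta0=0.1, num_queries=n, rng=rng)
    print(f"{n:>7} queries -> smallest gradient norm ~ {eps:.3f}")
```

On a benign instance like this one, SGD reaches small gradient norms far faster than the worst-case $\epsilon^{-4}$ rate; the paper's contribution is a hard construction on which no algorithm in this oracle model can do better.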
