Complexity Lower Bounds of Adaptive Gradient Algorithms for Non-convex Stochastic Optimization under Relaxed Smoothness
Recent results in non-convex stochastic optimization demonstrate the convergence of popular adaptive algorithms (e.g., AdaGrad) under the $(L_0, L_1)$-smoothness condition, but the rate of convergence is a higher-order polynomial in terms of problem parameters like the smoothness constants. The complexity guaranteed by such algorithms to find an $\epsilon$-stationary point may be significantly larger than the optimal complexity of $O(\Delta L \sigma^2 \epsilon^{-4})$ achieved by SGD in the $L$-smooth setting, where $\Delta$ is the initial optimality gap and $\sigma^2$ is the variance of the stochastic gradient. However, it is currently not known whether these higher-order dependencies can be tightened. To answer this question, we investigate complexity lower bounds for several adaptive optimization algorithms in the $(L_0, L_1)$-smooth setting, with a focus on the dependence on the problem parameters $\Delta$, $L_0$, and $L_1$. We provide complexity bounds for three variations of AdaGrad, which show at least a quadratic dependence on these problem parameters. Notably, we show that the decorrelated variant of AdaGrad-Norm requires a number of stochastic gradient queries that grows quadratically in these parameters to find an $\epsilon$-stationary point. We also provide a lower bound for SGD with a broad class of adaptive stepsizes. Our results show that, for certain adaptive algorithms, the $(L_0, L_1)$-smooth setting is fundamentally more difficult than the standard $L$-smooth setting, in terms of the initial optimality gap and the smoothness constants.
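For context, $(L_0, L_1)$-smoothness relaxes standard $L$-smoothness by allowing the local smoothness constant to grow with the gradient norm (commonly stated as $\|\nabla^2 f(x)\| \le L_0 + L_1 \|\nabla f(x)\|$). The sketch below shows AdaGrad-Norm, the scalar-stepsize variant of AdaGrad referenced above; the oracle interface, hyperparameters, and loop structure are illustrative assumptions, not the paper's exact formulation. The decorrelated variant would instead update the accumulator with a gradient sample drawn independently of the one used in the descent step.

```python
import numpy as np

def adagrad_norm(grad_oracle, x0, eta=1.0, v0=1.0, num_steps=1000):
    """AdaGrad-Norm sketch: SGD with a single scalar stepsize adapted
    via the running sum of squared stochastic gradient norms."""
    x = np.asarray(x0, dtype=float)
    v = v0  # accumulator of squared gradient norms
    for _ in range(num_steps):
        g = grad_oracle(x)              # stochastic gradient at x
        v += np.linalg.norm(g) ** 2     # accumulate ||g_t||^2
        x = x - (eta / np.sqrt(v)) * g  # adaptive scalar stepsize
    return x

# Example: minimize f(x) = 0.5 ||x||^2 with additive Gaussian gradient noise.
rng = np.random.default_rng(0)
oracle = lambda x: x + 0.1 * rng.standard_normal(x.shape)
x_final = adagrad_norm(oracle, x0=np.ones(10), num_steps=5000)
```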