Local Asymptotics for some Stochastic Optimization Problems: Optimality,
Constraint Identification, and Dual Averaging
We study local complexity measures for stochastic convex optimization problems, developing a local minimax theory analogous to that of Hájek and Le Cam for classical statistical problems, and giving efficient procedures based on Nesterov's dual averaging that (often) adaptively achieve optimal convergence guarantees. Our results strongly leverage the geometry of the optimization problem at hand, yielding function-specific lower bounds and convergence results. We show how variants of dual averaging, a stochastic gradient-based procedure, guarantee finite-time identification of constraints in optimization problems, while stochastic gradient procedures provably fail to do so. Additionally, we highlight a gap between optimization problems with linear and nonlinear constraints: all of our stochastic-gradient-based procedures are suboptimal even for the simplest nonlinear constraints.
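For reference, a standard form of Nesterov's dual averaging iteration (the notation below is illustrative and not necessarily the paper's) keeps a running sum of stochastic subgradients $g_k$ and minimizes it against a strongly convex prox-function $\psi$ over the constraint set $C$ with step sizes $\alpha_k$:
\[
z_{k+1} = z_k + g_k,
\qquad
x_{k+1} = \operatorname*{argmin}_{x \in C}
\Big\{ \langle z_{k+1}, x \rangle + \tfrac{1}{\alpha_{k+1}} \psi(x) \Big\}.
\]
Roughly, because each iterate is determined by the averaged gradient information rather than a single noisy gradient, the iterates can settle onto active constraints in finite time; this is the behavior the abstract contrasts with projected stochastic gradient steps of the form $x_{k+1} = \Pi_C(x_k - \alpha_k g_k)$.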