Empirical Risk Minimization for Stochastic Convex Optimization: $O(1/n)$- and $O(1/n^2)$-type of Risk Bounds

Abstract

Although there exist plentiful theories of empirical risk minimization (ERM) for supervised learning, the current theoretical understanding of ERM for a related problem, stochastic convex optimization (SCO), is limited. In this work, we strengthen the realm of ERM for SCO by exploiting smoothness and strong convexity conditions to improve the risk bounds. First, we establish an $\widetilde{O}(d/n + \sqrt{F_*/n})$ risk bound when the random function is nonnegative, convex and smooth, and the expected function is Lipschitz continuous, where $d$ is the dimensionality of the problem, $n$ is the number of samples, and $F_*$ is the minimal risk. Thus, when $F_*$ is small we obtain an $\widetilde{O}(d/n)$ risk bound, which is analogous to the $\widetilde{O}(1/n)$ optimistic rate of ERM for supervised learning. Second, if the objective function is also $\lambda$-strongly convex, we prove an $\widetilde{O}(d/n + \kappa F_*/n)$ risk bound, where $\kappa$ is the condition number, and improve it to $O(1/[\lambda n^2] + \kappa F_*/n)$ when $n = \widetilde{\Omega}(\kappa d)$. As a result, we obtain an $O(\kappa/n^2)$ risk bound under the condition that $n$ is large and $F_*$ is small, which, to the best of our knowledge, is the first $O(1/n^2)$-type of risk bound for ERM. Third, we stress that the above results are established in a unified framework, which allows us to derive new risk bounds under weaker conditions, e.g., without convexity of the random function or Lipschitz continuity of the expected function. Finally, we demonstrate that to achieve an $O(1/[\lambda n^2] + \kappa F_*/n)$ risk bound for supervised learning, the $\widetilde{\Omega}(\kappa d)$ requirement on $n$ can be replaced with $\Omega(\kappa^2)$, which is dimensionality-independent.
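
For concreteness, the following is a minimal sketch of the SCO setting the abstract refers to; the notation ($f$, $F$, $\mathbf{w}$, $\xi$, $\mathcal{W}$) is our own and may differ from the paper's.

\[
F(\mathbf{w}) = \mathbb{E}_{\xi}\big[f(\mathbf{w}, \xi)\big],
\qquad
F_* = \min_{\mathbf{w} \in \mathcal{W}} F(\mathbf{w}),
\]
\[
\widehat{\mathbf{w}}_n = \operatorname*{argmin}_{\mathbf{w} \in \mathcal{W}} \; \frac{1}{n} \sum_{i=1}^{n} f(\mathbf{w}, \xi_i)
\quad \text{(empirical risk minimizer over $n$ i.i.d. samples $\xi_1, \dots, \xi_n$)}.
\]

The risk bounds quoted above control the excess risk $F(\widehat{\mathbf{w}}_n) - F_*$; for example, the first result reads $F(\widehat{\mathbf{w}}_n) - F_* = \widetilde{O}(d/n + \sqrt{F_*/n})$ when each $f(\cdot, \xi)$ is nonnegative, convex and smooth and $F$ is Lipschitz continuous.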
