Empirical Risk Minimization for Stochastic Convex Optimization: $O(1/n)$- and $O(1/n^2)$-type of Risk Bounds

Although there exist plentiful theories of empirical risk minimization (ERM) for supervised learning, current theoretical understandings of ERM for a related problem, stochastic convex optimization (SCO), are limited. In this work, we strengthen the realm of ERM for SCO by exploiting smoothness and strong convexity conditions to improve the risk bounds. First, we establish an $\widetilde{O}(d/n + \sqrt{F_*/n})$ risk bound when the random function is nonnegative, convex and smooth, and the expected function is Lipschitz continuous, where $d$ is the dimensionality of the problem, $n$ is the number of samples, and $F_*$ is the minimal risk. Thus, when $F_*$ is small we obtain an $\widetilde{O}(d/n)$ risk bound, which is analogous to the $O(1/n)$ optimistic rate of ERM for supervised learning. Second, if the objective function is also $\lambda$-strongly convex, we prove an $\widetilde{O}(d/n + \kappa F_*/n)$ risk bound, where $\kappa$ is the condition number, and improve it to $O(1/[\lambda n^2] + \kappa F_*/n)$ when $n = \widetilde{\Omega}(\kappa d)$. As a result, we obtain an $O(\kappa/n^2)$ risk bound under the condition that $n$ is large and $F_*$ is small, which, to the best of our knowledge, is the first $O(1/n^2)$-type of risk bound of ERM. Third, we stress that the above results are established in a unified framework, which allows us to derive new risk bounds under weaker conditions, e.g., without convexity of the random function or Lipschitz continuity of the expected function. Finally, we demonstrate that to achieve an $O(1/[\lambda n^2] + \kappa F_*/n)$ risk bound for supervised learning, the $\widetilde{\Omega}(\kappa d)$ requirement on $n$ can be replaced with $\Omega(\kappa^2)$, which is dimensionality-independent.
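For context, a minimal sketch of the standard ERM-for-SCO setup that these bounds refer to; the notation ($f$, $F$, $\mathcal{W}$, $\xi_i$, $\widehat{\mathbf{w}}$) is assumed here for illustration rather than quoted from the paper:
\[
F(\mathbf{w}) = \mathbb{E}_{\xi}\big[f(\mathbf{w}, \xi)\big],
\qquad
\widehat{\mathbf{w}} \in \operatorname*{argmin}_{\mathbf{w} \in \mathcal{W}} \; \frac{1}{n} \sum_{i=1}^{n} f(\mathbf{w}, \xi_i),
\qquad
F_* = \min_{\mathbf{w} \in \mathcal{W}} F(\mathbf{w}),
\]
where $\xi_1, \dots, \xi_n$ are i.i.d. samples and $\mathcal{W} \subseteq \mathbb{R}^d$ is the domain. The risk bounds above control the excess risk $F(\widehat{\mathbf{w}}) - F_*$ of the empirical risk minimizer $\widehat{\mathbf{w}}$.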