Fast rates for empirical risk minimization with cadlag losses with bounded sectional variation norm

Empirical risk minimization over sieves of the class of cadlag functions with bounded variation norm has a long history, starting with Total Variation Denoising (Rudin et al., 1992), and has been considered by several recent articles, in particular Fang et al. (2019) and van der Laan (2015). In this article, we show how a certain representation of cadlag functions with bounded sectional variation, also called Hardy-Krause variation, allows to bound the bracketing entropy of sieves of and therefore derive fast rates of convergence in nonparametric function estimation. Specifically, for any sequence that (slowly) diverges to , we show that we can construct an estimator with rate of convergence over , under some fairly general assumptions. Remarkably, the dimension only affects the rate in through the logarithmic factor, making this method especially appropriate for high dimensional problems. In particular, we show that in the case of nonparametric regression over sieves of cadlag functions with bounded sectional variation norm, this upper bound on the rate of convergence holds for least-squares estimators, under the random design, sub-exponential errors setting.
View on arXiv