Adaptive approximation of monotone functions

Abstract

We study the classical problem of approximating a non-decreasing function $f: \mathcal{X} \to \mathcal{Y}$ in $L^p(\mu)$ norm by sequentially querying its values, for known compact real intervals $\mathcal{X}$, $\mathcal{Y}$ and a known probability measure $\mu$ on $\mathcal{X}$. For any function $f$, we characterize the minimum number of evaluations of $f$ that algorithms need to guarantee an approximation $\hat{f}$ with an $L^p(\mu)$ error below $\epsilon$ after stopping. Unlike worst-case results that hold uniformly over all $f$, our complexity measure depends on each specific function $f$. To address this problem, we introduce GreedyBox, a generalization of an algorithm originally proposed by Novak (1992) for numerical integration. We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors. Additionally, we uncover results regarding piecewise-smooth functions. Perhaps as expected, the $L^p(\mu)$ error of GreedyBox decreases much faster for piecewise-$C^2$ functions than predicted by the algorithm (without any knowledge of the smoothness of $f$). A simple modification even achieves optimal minimax approximation rates for such functions, which we compute explicitly. In particular, our findings highlight multiple performance gaps between adaptive and non-adaptive algorithms, between smooth and piecewise-smooth functions, and between monotone and non-monotone functions. Finally, we provide numerical experiments to support our theoretical results.
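The core idea of adaptive greedy refinement for a monotone function can be illustrated as follows. This is a minimal sketch, not the paper's GreedyBox: it assumes $\mu$ is the Lebesgue measure on $[a, b]$ and uses a piecewise-constant approximant. Monotonicity gives $f(l) \le f(x) \le f(r)$ on each piece $[l, r]$, so the midpoint value $(f(l)+f(r))/2$ incurs an $L^p$ error of at most $\frac{f(r)-f(l)}{2}\,(r-l)^{1/p}$ on that piece; the algorithm greedily splits the piece with the largest certified bound.

```python
import heapq

def greedy_monotone_approx(f, a, b, eps, p=1, max_evals=10_000):
    """Hypothetical sketch of greedy adaptive approximation of a
    non-decreasing f on [a, b] (Lebesgue measure), NOT the paper's
    GreedyBox. Returns (pieces, evals) where pieces is a sorted list
    of (l, r, value) defining a piecewise-constant approximant."""

    def bound(l, r, fl, fr):
        # Certified L^p error of the midpoint approximant on [l, r],
        # valid because monotonicity traps f(x) in [fl, fr].
        return ((fr - fl) / 2.0) * (r - l) ** (1.0 / p)

    fa, fb = f(a), f(b)
    evals = 2
    # Max-heap via negated keys: (-error_bound, l, r, f(l), f(r)).
    heap = [(-bound(a, b, fa, fb), a, b, fa, fb)]
    while evals < max_evals:
        # Errors on disjoint pieces combine as an l^p sum.
        total = sum((-e) ** p for e, *_ in heap) ** (1.0 / p)
        if total <= eps:
            break
        # Split the piece with the largest certified error bound.
        _, l, r, fl, fr = heapq.heappop(heap)
        m = (l + r) / 2.0
        fm = f(m)
        evals += 1
        heapq.heappush(heap, (-bound(l, m, fl, fm), l, m, fl, fm))
        heapq.heappush(heap, (-bound(m, r, fm, fr), m, r, fm, fr))
    pieces = sorted((l, r, (fl + fr) / 2.0) for _, l, r, fl, fr in heap)
    return pieces, evals
```

The greedy rule adapts the partition to $f$ itself: flat regions are left coarse while pieces where $f$ varies are refined, which is the function-dependent behaviour the abstract contrasts with worst-case (non-adaptive) sampling.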
