Adaptive approximation of monotone functions

We study the classical problem of approximating a non-decreasing function $f : \mathcal{X} \to \mathcal{Y}$ in $L^p(\mu)$ norm by sequentially querying its values, for known compact real intervals $\mathcal{X}$, $\mathcal{Y}$ and a known probability measure $\mu$ on $\mathcal{X}$. For any function $f$, we characterize the minimum number of evaluations of $f$ that algorithms need to guarantee an approximation $\hat{f}$ with an $L^p(\mu)$ error below $\varepsilon$ after stopping. Unlike worst-case results that hold uniformly over all $f$, our complexity measure depends on each specific function $f$. To address this problem, we introduce GreedyBox, a generalization of an algorithm originally proposed by Novak (1992) for numerical integration. We prove that GreedyBox achieves an optimal sample complexity for any function $f$, up to logarithmic factors. Additionally, we uncover results regarding piecewise-smooth functions. Perhaps as expected, the $L^p(\mu)$ error of GreedyBox decreases much faster for piecewise-$\mathcal{C}^2$ functions than predicted by the algorithm itself (which has no knowledge of the smoothness of $f$). A simple modification even achieves optimal minimax approximation rates for such functions, which we compute explicitly. In particular, our findings highlight multiple performance gaps between adaptive and non-adaptive algorithms, smooth and piecewise-smooth functions, and monotone and non-monotone functions. Finally, we provide numerical experiments to support our theoretical results.
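To make the greedy box-refinement idea concrete, here is a minimal Python sketch in the spirit of the description above, assuming the $L^1$ error under the Lebesgue measure on $[a, b]$, midpoint splits, and a piecewise-constant approximant. The names `greedy_box`, `cert`, and `max_evals` are illustrative choices, not the paper's exact GreedyBox specification. The key observation is that monotonicity traps $f$ on each subinterval inside the box spanned by its endpoint values, which yields a computable upper bound on the approximation error; the algorithm repeatedly queries the midpoint of the subinterval with the largest certified error until the total bound falls below $\varepsilon$.

```python
import heapq

def greedy_box(f, a, b, eps, max_evals=10_000):
    """Greedy box refinement for a non-decreasing f on [a, b].

    Illustrative sketch only: the split rule and error certificate are
    the simplest choices consistent with the abstract, not the paper's
    exact GreedyBox algorithm. Returns the queried points, their values,
    and a certified upper bound on the L^1 (Lebesgue) error of the
    induced piecewise-constant approximant.
    """
    xs = {a: f(a), b: f(b)}  # queried points and their values
    n_evals = 2

    def cert(x0, x1):
        # Monotonicity traps f inside the box [x0, x1] x [f(x0), f(x1)];
        # approximating f by the box's vertical midpoint certifies an
        # L^1 error of at most (width * height) / 2 on [x0, x1].
        return 0.5 * (x1 - x0) * (xs[x1] - xs[x0])

    heap = [(-cert(a, b), a, b)]  # max-heap on certified error (negated keys)
    total = cert(a, b)
    while total > eps and n_evals < max_evals:
        neg_err, x0, x1 = heapq.heappop(heap)
        total += neg_err              # neg_err is negative: drop the old bound
        m = 0.5 * (x0 + x1)
        xs[m] = f(m)                  # one new evaluation at the midpoint
        n_evals += 1
        for lo, hi in ((x0, m), (m, x1)):
            c = cert(lo, hi)
            total += c
            heapq.heappush(heap, (-c, lo, hi))
    pts = sorted(xs)
    return pts, [xs[x] for x in pts], total
```

For instance, `greedy_box(lambda x: x ** 3, 0.0, 1.0, 1e-3)` returns the queried points, their values, and a certified error bound; the approximant takes the value $(y_i + y_{i+1})/2$ on each subinterval $[x_i, x_{i+1}]$, so the greedy rule naturally concentrates evaluations where $f$ varies most.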