Dimension-independent learning rates for high-dimensional classification problems
Main text: 23 pages, 1 figure; bibliography: 3 pages
Abstract
We study the problem of approximating and estimating classification functions whose decision boundary lies in a prescribed function space. Functions of this type arise naturally as solutions of regularized neural network learning problems, and neural networks can approximate them without the curse of dimensionality. We first adapt existing results to show that every function in this space can be approximated by a neural network with bounded weights. Building on this, we prove the existence of a neural network with bounded weights that approximates such a classification function, and we leverage these bounds to quantify the resulting estimation rates. Finally, we present a numerical study that analyzes the effect of different regularity conditions on the decision boundaries.
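As a minimal illustration of the kind of object the abstract describes, the sketch below approximates a classification function (the indicator of a half-space, a hypothetical stand-in for the paper's more general boundary classes) by a two-neuron ReLU network whose weights are bounded by a parameter R. The network realizes a ramp that agrees with the indicator outside a band of width 1/R around the boundary, so the empirical L1 error shrinks as R grows. All names and the choice of a linear boundary are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10                          # ambient dimension (illustrative choice)
w = np.ones(d) / np.sqrt(d)     # unit normal of a hyperplane boundary (assumption)
b = 0.0                         # offset of the hyperplane

def classifier(x):
    # target classification function: indicator of the half-space {x : <w, x> > b}
    return (x @ w > b).astype(float)

def relu(z):
    return np.maximum(z, 0.0)

def network(x, R):
    # two-neuron ReLU network with weights bounded by R:
    #   f(x) = ReLU(R(<w,x> - b)) - ReLU(R(<w,x> - b) - 1)
    # a ramp rising from 0 to 1 over a band of width 1/R around the boundary
    s = x @ w - b
    return relu(R * s) - relu(R * s - 1.0)

# empirical L1 error against the indicator on standard-normal samples
X = rng.standard_normal((100_000, d))
for R in (1.0, 10.0, 100.0):
    err = np.mean(np.abs(network(X, R) - classifier(X)))
    print(f"R = {R:6.1f}  L1 error ~ {err:.4f}")
```

Since the two functions differ only on the band {0 < <w,x> - b < 1/R}, the error decays on the order of 1/R, at the price of the weight bound growing with R; trading off this bound against approximation accuracy is what drives the estimation rates discussed in the abstract.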
