Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev Spaces
Journal of Machine Learning Research (JMLR), 2022
Abstract
We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev space $W^s(L_q(\Omega))$ on a bounded domain $\Omega \subset \mathbb{R}^d$, where the error is measured in the $L_p(\Omega)$ norm. This problem is important for studying the application of neural networks in scientific computing and has previously been solved only in the case $p = q = \infty$. Our contribution is to provide a solution for all $1 \le p, q \le \infty$ and $s > 0$. Our results show that deep ReLU networks significantly outperform classical methods of approximation, but that this comes at the cost of parameters which are not encodable.
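For orientation, the comparison behind the last sentence can be sketched in the previously known case $p = q = \infty$. The following is a hedged summary of standard results from the approximation-theory literature, not a restatement of the paper's general theorem; here $N$ denotes the number of network parameters, and the symbols $\Sigma_N$ (a generic classical approximation family with $N$ degrees of freedom) and $\mathcal{N}_N$ (deep ReLU networks with $N$ parameters) are notation introduced here for illustration.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch of the rate comparison in the previously known case p = q = infinity.
% N is the number of parameters; f ranges over the unit ball of the Sobolev
% space W^s(L_infty(Omega)) on a bounded domain Omega in R^d.

% Classical methods whose N parameters are encodable with a bounded number of
% bits each (splines, wavelets, ...) are limited by the metric entropy of the
% Sobolev unit ball:
\[
  \inf_{g \in \Sigma_N} \lVert f - g \rVert_{L_\infty(\Omega)}
    \gtrsim N^{-s/d}.
\]

% Deep ReLU networks with N parameters achieve the doubled exponent
\[
  \inf_{f_N \in \mathcal{N}_N} \lVert f - f_N \rVert_{L_\infty(\Omega)}
    \lesssim N^{-2s/d},
\]
% and this exponent is sharp. The doubling is possible only because the
% weights attaining it cannot be stored with a fixed number of bits per
% parameter: this is the sense in which the parameters are not encodable.
\end{document}
```

The gap between the two exponents is exactly the "significant outperformance" claimed above: halving the error exponent's denominator would otherwise contradict entropy lower bounds, so it can only be bought with non-encodable weights.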
