Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev Spaces

Journal of Machine Learning Research (JMLR), 2022
Abstract

We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev space $W^s(L_q(\Omega))$ on a bounded domain $\Omega$, where the error is measured in $L_p(\Omega)$. This problem is important for studying the application of neural networks in scientific computing and has previously been solved only in the case $p=q=\infty$. Our contribution is to provide a solution for all $1\leq p,q\leq \infty$ and $s>0$. Our results show that deep ReLU networks significantly outperform classical methods of approximation, but that this comes at the cost of parameters which are not encodable.
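For context, the previously solved case $p=q=\infty$ already illustrates the gap alluded to above. The display below is a rough sketch of that known comparison from the prior literature, not a statement of this paper's new theorems; the notation $E_W^{\mathrm{cl}}$ and $E_W^{\mathrm{ReLU}}$ for worst-case errors, and $d$ for the dimension of $\Omega$, are introduced here purely for illustration.

% Sketch of the previously known p = q = \infty comparison referenced in
% the abstract (prior literature, not this paper's new results). Here W is
% the number of parameters, d the dimension of \Omega, and E_W^{cl} /
% E_W^{ReLU} denote the worst-case L_\infty error over the unit ball of
% W^s(L_\infty(\Omega)) achievable by classical methods with encodable
% parameters / by deep ReLU networks with at most W parameters; constants
% and logarithmic factors are suppressed.
\[
  E_W^{\mathrm{cl}}\bigl(W^s(L_\infty(\Omega))\bigr) \;\sim\; W^{-s/d}
  \qquad\text{vs.}\qquad
  E_W^{\mathrm{ReLU}}\bigl(W^s(L_\infty(\Omega))\bigr) \;\sim\; W^{-2s/d}.
\]

Roughly speaking, the doubled exponent $2s/d$ is what "significantly outperform classical methods" refers to, while the caveat about non-encodable parameters reflects the standard entropy obstruction: any scheme whose parameters can each be specified with $\mathcal{O}(\log W)$ bits is limited to the classical rate $W^{-s/d}$ up to logarithmic factors.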
