Neural Network Approximation of Continuous Functions in High Dimensions with Applications to Inverse Problems

28 August 2022
Santhosh Karnik
Rongrong Wang
M. Iwen
arXiv:2208.13305
Abstract

The remarkable successes of neural networks in a huge variety of inverse problems have fueled their adoption in disciplines ranging from medical imaging to seismic analysis over the past decade. However, the high dimensionality of such inverse problems has simultaneously left current theory, which predicts that networks should scale exponentially in the dimension of the problem, unable to explain why the seemingly small networks used in these settings work as well as they do in practice. To reduce this gap between theory and practice, we provide a general method for bounding the complexity required for a neural network to approximate a Hölder (or uniformly) continuous function defined on a high-dimensional set with a low-complexity structure. The approach is based on the observation that the existence of a Johnson-Lindenstrauss embedding $A \in \mathbb{R}^{d \times D}$ of a given high-dimensional set $S \subset \mathbb{R}^D$ into a low-dimensional cube $[-M,M]^d$ implies that for any Hölder (or uniformly) continuous function $f: S \to \mathbb{R}^p$, there exists a Hölder (or uniformly) continuous function $g: [-M,M]^d \to \mathbb{R}^p$ such that $g(Ax) = f(x)$ for all $x \in S$. Hence, if one has a neural network which approximates $g: [-M,M]^d \to \mathbb{R}^p$, then a layer can be added that implements the JL embedding $A$ to obtain a neural network that approximates $f: S \to \mathbb{R}^p$. By pairing JL embedding results with results on the approximation of Hölder (or uniformly) continuous functions by neural networks, one then obtains bounds on the complexity required for a neural network to approximate Hölder (or uniformly) continuous functions on high-dimensional sets. The end result is a general theoretical framework which can then be used to better explain the observed empirical successes of smaller networks in a wider variety of inverse problems than current theory allows.
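
The construction described in the abstract translates directly into an architecture: a single fixed linear layer implementing $A$, followed by a network that approximates $g$. Below is a minimal PyTorch sketch of that idea; the class name, the choice of a scaled Gaussian matrix as the JL embedding, and the toy network standing in for $g$ are illustrative assumptions, not taken from the paper (whose results cover general JL embeddings of $S$).

```python
import torch
import torch.nn as nn

class JLPrependedNet(nn.Module):
    """Prepend a fixed Johnson-Lindenstrauss embedding A in R^{d x D} to a
    network approximating g: [-M, M]^d -> R^p, so that the composite
    network approximates f: S -> R^p via f(x) ~= g(Ax)."""

    def __init__(self, D: int, d: int, g: nn.Module):
        super().__init__()
        # Assumption: a scaled Gaussian matrix, which is a JL embedding of a
        # low-complexity set S with high probability for suitable d.
        A = torch.randn(d, D) / d ** 0.5
        self.embed = nn.Linear(D, d, bias=False)
        with torch.no_grad():
            self.embed.weight.copy_(A)
        self.embed.weight.requires_grad_(False)  # A is fixed, never trained
        self.g = g

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Implements g(Ax); only g carries trainable parameters.
        return self.g(self.embed(x))

# Usage: a small network on the d-dimensional projection stands in for one
# that would otherwise have to operate on all D input coordinates.
D, d, p = 10_000, 20, 1
g = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, p))
net = JLPrependedNet(D, d, g)
y = net(torch.randn(8, D))  # batch of 8 points in R^D, output in R^p
```

Note that the complexity of the trainable part depends on $d$ rather than $D$, which is the mechanism by which the framework explains why small networks can suffice on high-dimensional sets with low-complexity structure.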
