
Abstract Universal Approximation for Neural Networks

12 July 2020
Zi Wang
Aws Albarghouthi
Gautam Prakriya
arXiv: 2007.06093
Abstract

With growing concerns about the safety and robustness of neural networks, a number of researchers have successfully applied abstract interpretation with numerical domains to verify properties of neural networks. Why do numerical domains work for neural-network verification? We present a theoretical result that demonstrates the power of numerical domains, namely, the simple interval domain, for analysis of neural networks. Our main theorem, which we call the abstract universal approximation (AUA) theorem, generalizes the recent result by Baader et al. [2020] for ReLU networks to a rich class of neural networks. The classical universal approximation theorem says that, given a function f, for any desired precision, there is a neural network that can approximate f. The AUA theorem states that for any function f, there exists a neural network whose abstract interpretation is an arbitrarily close approximation of the collecting semantics of f. Further, the network may be constructed using any well-behaved activation function (sigmoid, tanh, parametric ReLU, ELU, and more), making our result quite general. The implication of the AUA theorem is that there exist provably correct neural networks: suppose, for instance, that there is an ideal robust image classifier represented as a function f. The AUA theorem tells us that there exists a neural network that approximates f and for which we can automatically construct proofs of robustness using the interval abstract domain. Our work sheds light on the existence of provably correct neural networks, using arbitrary activation functions, and establishes intriguing connections between well-known theoretical properties of neural networks and abstract interpretation using numerical domains.
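To make the interval-domain analysis the abstract refers to concrete, here is a minimal sketch (not code from the paper; the network weights and input region below are hypothetical) of abstract interpretation with the interval domain: an input box is pushed through a small ReLU network, and the resulting output interval soundly over-approximates every output the network can produce on that box. A robustness proof of the kind the AUA theorem guarantees exists is exactly such a computation showing the output bounds stay inside a safe region.

```python
# Minimal sketch of interval abstract interpretation of a ReLU network.
# Not from the paper; weights and the input box are hypothetical.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Sound interval bounds for x -> W @ x + b when x lies in [lo, hi].
    Positive weights preserve the ordering of bounds; negative weights
    swap lower and upper, so we split W by sign."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps intervals to intervals exactly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# A tiny 2-2-1 network with hypothetical weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.5])

# Input region: the box [0.1, 0.3] x [0.2, 0.4], e.g. an L-infinity
# ball of radius 0.1 around the point (0.2, 0.3).
lo = np.array([0.1, 0.2]); hi = np.array([0.3, 0.4])

lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)  # [0.5] [0.6]: the output provably stays in [0.5, 0.6]
               # for every input in the box, e.g. it is always positive.
```

The output interval may be loose for a given network; the AUA theorem's point is that, for any target function, some approximating network exists on which this simple analysis certifies the desired property.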
