
The universal approximation theorem for complex-valued neural networks

Applied and Computational Harmonic Analysis (ACHA), 2020
6 December 2020
F. Voigtlaender
Abstract

We generalize the classical universal approximation theorem for neural networks to the case of complex-valued neural networks. Precisely, we consider feedforward networks with a complex activation function $\sigma : \mathbb{C} \to \mathbb{C}$ in which each neuron performs the operation $\mathbb{C}^N \to \mathbb{C},\ z \mapsto \sigma(b + w^T z)$ with weights $w \in \mathbb{C}^N$ and a bias $b \in \mathbb{C}$, and with $\sigma$ applied componentwise. We completely characterize those activation functions $\sigma$ for which the associated complex networks have the universal approximation property, meaning that they can uniformly approximate any continuous function on any compact subset of $\mathbb{C}^d$ arbitrarily well. Unlike the classical case of real networks, the set of "good activation functions" which give rise to networks with the universal approximation property differs significantly depending on whether one considers deep networks or shallow networks: for deep networks with at least two hidden layers, the universal approximation property holds as long as $\sigma$ is neither a polynomial, nor a holomorphic function, nor an antiholomorphic function. Shallow networks, on the other hand, are universal if and only if the real part or the imaginary part of $\sigma$ is not a polyharmonic function.
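The neuron operation described in the abstract can be sketched in a few lines of numpy, which handles complex arrays natively. The following is a minimal illustration, not the paper's implementation: the "split-ReLU" activation (ReLU applied separately to real and imaginary parts) and all layer widths are our own choices for the example. Note that split-ReLU is neither a polynomial, nor holomorphic, nor antiholomorphic, so by the characterization above it is a valid activation for universal deep complex networks.

```python
import numpy as np

def sigma(z):
    # Split-ReLU activation (illustrative choice): ReLU applied separately to
    # the real and imaginary parts. It is neither a polynomial, holomorphic,
    # nor antiholomorphic, so deep networks built on it are universal per the
    # paper's characterization.
    return np.maximum(z.real, 0) + 1j * np.maximum(z.imag, 0)

def layer(z, W, b):
    # One complex layer: each neuron computes sigma(b + w^T z), with the
    # (non-conjugated) inner product w^T z and sigma applied componentwise.
    return sigma(W @ z + b)

rng = np.random.default_rng(0)
d, h = 3, 5  # input dimension and hidden width (arbitrary for the sketch)
W1 = rng.standard_normal((h, d)) + 1j * rng.standard_normal((h, d))
b1 = rng.standard_normal(h) + 1j * rng.standard_normal(h)
W2 = rng.standard_normal((1, h)) + 1j * rng.standard_normal((1, h))
b2 = rng.standard_normal(1) + 1j * rng.standard_normal(1)

# A two-layer network C^3 -> C with random complex weights and biases.
z = rng.standard_normal(d) + 1j * rng.standard_normal(d)
out = layer(layer(z, W1, b1), W2, b2)
```

The universality result concerns what such networks can approximate as the width grows; this sketch only shows the forward pass of a single fixed network.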
