Vocabulary for Universal Approximation: A Linguistic Perspective of Mapping Compositions

Abstract

In recent years, deep learning-based sequence models, such as language models, have attracted much attention and achieved great success, which motivates researchers to explore the possibility of transforming non-sequential problems into a sequential form. Following this line of thought, deep neural networks can be represented as composite functions of a sequence of mappings, linear or nonlinear, where each mapping in the composition can be viewed as a \emph{word}. However, the weights of the linear mappings are undetermined, and hence an infinite number of words would be required. In this article, we investigate the finite case and constructively prove the existence of a finite \emph{vocabulary} $V=\{\phi_i: \mathbb{R}^d \to \mathbb{R}^d \mid i=1,\dots,n\}$ with $n=O(d^2)$ for universal approximation. That is, for any continuous mapping $f: \mathbb{R}^d \to \mathbb{R}^d$, compact domain $\Omega$, and $\varepsilon>0$, there is a sequence of mappings $\phi_{i_1}, \dots, \phi_{i_m} \in V$, $m \in \mathbb{Z}_+$, such that the composition $\phi_{i_m} \circ \cdots \circ \phi_{i_1}$ approximates $f$ on $\Omega$ with an error less than $\varepsilon$. Our results demonstrate an unusual approximation power of mapping compositions and motivate a novel compositional model for regular languages.
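
To make the vocabulary-and-composition setup concrete, here is a minimal Python sketch of the formalism: a toy finite vocabulary of maps $\phi_i: \mathbb{R}^d \to \mathbb{R}^d$ and a routine that evaluates a \emph{word} sequence as the composition $\phi_{i_m} \circ \cdots \circ \phi_{i_1}$. The specific maps below are illustrative assumptions only, not the paper's $O(d^2)$-size construction.

```python
import numpy as np

d = 2  # ambient dimension

# A toy finite vocabulary V = {phi_1, ..., phi_n} of maps R^d -> R^d.
# These particular maps are placeholders; the paper constructs a specific
# vocabulary of size O(d^2) with the universal approximation property.
vocabulary = [
    lambda x: x + np.array([0.1, 0.0]),   # small shift along e_1
    lambda x: x + np.array([0.0, 0.1]),   # small shift along e_2
    lambda x: np.array([x[1], -x[0]]),    # rotation by -90 degrees
    lambda x: np.maximum(x, 0.0),         # ReLU-type nonlinearity
]

def compose(word, x):
    """Apply phi_{i_m} o ... o phi_{i_1} to x, where word = (i_1, ..., i_m)."""
    for i in word:
        x = vocabulary[i](x)
    return x

# A "sentence" in this vocabulary is just an index sequence.
word = [0, 0, 2, 3, 1]
print(compose(word, np.array([0.5, -0.3])))
```

In this picture, approximating a given $f$ amounts to choosing a suitable index sequence, analogous to forming a sentence from a fixed vocabulary.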
