An Approximation Theory for Metric Space-Valued Functions With A View
Towards Deep Learning
Motivated by the developing mathematics of deep learning, we build universal function approximators of continuous maps between arbitrary Polish metric spaces X and Y, using elementary functions between Euclidean spaces as building blocks. Earlier results assume that the target space Y is a topological vector space. We overcome this limitation by "randomization": our approximators output discrete probability measures over Y. When X and Y are Polish without additional structure, we prove very general qualitative guarantees; when they have suitable combinatorial structure, we prove quantitative guarantees for Hölder-like maps, including maps between finite graphs, solution operators to rough differential equations between certain Carnot groups, and continuous non-linear operators between Banach spaces arising in inverse problems. In particular, we show that the required number of Dirac measures is determined by the combinatorial structure of X and Y. For barycentric Y, including Banach spaces, ℝ-trees, Hadamard manifolds, or Wasserstein spaces on Polish metric spaces, our approximators reduce to Y-valued functions. When the Euclidean approximators are neural networks, our constructions generalize transformer networks, providing a new probabilistic viewpoint of geometric deep learning.
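To make the shape of such an approximator concrete, here is a minimal NumPy sketch, not taken from the paper: a small network maps an input x in R^d to softmax weights over N fixed anchor points y_1, ..., y_N in the target space, yielding the discrete measure sum_i w_i(x) δ_{y_i}; when Y is barycentric (here illustrated with a vector space), the measure collapses to a Y-valued output via its barycenter. All names, the architecture, and the anchor-point setup are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

class ProbabilisticApproximator:
    """Maps x in R^d to a discrete probability measure sum_i w_i(x) * delta_{y_i}
    supported on N fixed anchor points y_1, ..., y_N in the target space Y."""

    def __init__(self, d, width, anchors, seed=None):
        rng = np.random.default_rng(seed)
        n = len(anchors)
        self.anchors = anchors                       # points y_i in Y (here: vectors)
        self.W1 = rng.standard_normal((width, d)) / np.sqrt(d)
        self.b1 = np.zeros(width)
        self.W2 = rng.standard_normal((n, width)) / np.sqrt(width)
        self.b2 = np.zeros(n)

    def measure(self, x):
        h = np.maximum(self.W1 @ x + self.b1, 0.0)   # ReLU feature map
        w = softmax(self.W2 @ h + self.b2)           # mixing weights over Dirac masses
        return list(zip(w, self.anchors))            # the measure sum_i w_i delta_{y_i}

    def barycenter(self, x):
        # For barycentric Y (e.g., a Banach space) the measure reduces to a
        # Y-valued output; in a vector space the barycenter is the weighted mean.
        return sum(w * y for w, y in self.measure(x))

# Usage: three anchor points in Y = R^2, inputs from R^4.
anchors = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
model = ProbabilisticApproximator(d=4, width=16, anchors=anchors, seed=0)
mu = model.measure(np.ones(4))      # discrete probability measure over Y
y = model.barycenter(np.ones(4))    # Y-valued output after barycentric reduction
```

Under this reading, the number of anchor points plays the role of the number of Dirac measures in the quantitative guarantees, and the softmax-over-anchors step is what makes the construction resemble (and generalize) attention in transformer networks.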