
Deep Generative Models: Complexity, Dimensionality, and Approximation

Abstract

Generative networks have shown remarkable success in learning complex data distributions, particularly in generating high-dimensional data from lower-dimensional inputs. While this capability is well-documented empirically, its theoretical underpinning remains unclear. One common theoretical explanation appeals to the widely accepted manifold hypothesis, which suggests that many real-world datasets, such as images and signals, possess intrinsic low-dimensional geometric structures. Under this manifold hypothesis, it is widely believed that to approximate a distribution on a d-dimensional Riemannian manifold, the latent dimension needs to be at least d or d+1. In this work, we show that this requirement on the latent dimension is not necessary by demonstrating that generative networks can approximate distributions on d-dimensional Riemannian manifolds from inputs of any arbitrary dimension, even lower than d, taking inspiration from the concept of space-filling curves. This approach, in turn, leads to a super-exponential complexity bound on the deep neural networks, reflected in an expanded number of neurons. Our findings thus challenge the conventional belief about the relationship between input dimensionality and the ability of generative networks to model data distributions. This novel insight not only corroborates the practical effectiveness of generative networks in handling complex data structures, but also underscores a critical trade-off between approximation error, dimensionality, and model complexity.
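The space-filling-curve intuition can be illustrated with a small sketch (an illustration of the general concept, not the paper's actual construction): a discrete Hilbert curve maps a single scalar index onto a 2^k × 2^k grid, so a one-dimensional latent variable traverses every cell of a two-dimensional region, coming arbitrarily close to any target point as the order k grows.

```python
def hilbert_d2xy(order: int, d: int) -> tuple[int, int]:
    """Map index d along a Hilbert curve of the given order to a point
    (x, y) on a 2**order x 2**order grid.

    Standard iterative conversion: at each scale s, extract the quadrant
    bits (rx, ry) from d, rotate/reflect the partial coordinates to match
    the curve's orientation, and shift into the selected quadrant.
    """
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            if rx == 1:          # reflect within the sub-square
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x          # rotate 90 degrees
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y


# A 1D parameter sweep visits every cell of an 8x8 grid exactly once,
# with consecutive points always adjacent -- a discrete analogue of a
# 1D latent input covering a 2D region.
points = [hilbert_d2xy(3, d) for d in range(64)]
```

As the order increases, the curve's resolution doubles in each axis, so the distance from any grid point to the curve shrinks geometrically; this mirrors the trade-off in the abstract, where pushing the latent dimension below d is paid for in network complexity.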

@article{wang2025_2504.00820,
  title={Deep Generative Models: Complexity, Dimensionality, and Approximation},
  author={Kevin Wang and Hongqian Niu and Yixin Wang and Didong Li},
  journal={arXiv preprint arXiv:2504.00820},
  year={2025}
}