New universal operator approximation theorem for encoder-decoder architectures (Preprint)

Motivated by the rapidly growing mathematical literature on operator approximation with neural networks, we present a novel universal operator approximation theorem for a broad class of encoder-decoder architectures. In this study, we focus on approximating continuous operators in $C(X, Y)$, where $X$ and $Y$ are infinite-dimensional normed or metric spaces, and we consider uniform convergence on compact subsets of $X$. Unlike standard results in the operator learning literature, we investigate the case where the approximating operator sequence can be chosen independently of the compact sets. Taking a topological perspective, we analyze different types of operator approximation and show that compact-set-independent approximation is a strictly stronger property in most relevant operator learning frameworks. To establish our results, we introduce a new approximation property tailored to encoder-decoder architectures, which enables us to prove a universal operator approximation theorem ensuring uniform convergence on every compact subset. This result unifies and extends existing universal operator approximation theorems for various encoder-decoder architectures, including classical DeepONets, BasisONets, special cases of MIONets, architectures based on frames, and other related approaches.
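To make the abstract's central distinction concrete, the following is a schematic LaTeX formalization of the two approximation notions for a continuous operator; the symbols ($\Gamma$ for the target operator, $\Gamma_n$ for the approximants, $d_Y$ for the metric on $Y$) are illustrative choices, not notation taken from the paper.

% Two notions of uniform approximation on compact sets for a continuous
% operator \Gamma : X \to Y (notation illustrative, not from the paper).

% (1) Compact-set-dependent: the approximant may depend on the compact set K.
\forall K \subseteq X \text{ compact}, \ \forall \varepsilon > 0 \ \exists \Gamma_{K,\varepsilon}: \quad
  \sup_{x \in K} d_Y\!\left( \Gamma_{K,\varepsilon}(x), \Gamma(x) \right) < \varepsilon

% (2) Compact-set-independent: a single sequence works for every compact set.
\exists (\Gamma_n)_{n \in \mathbb{N}} \ \forall K \subseteq X \text{ compact}: \quad
  \lim_{n \to \infty} \, \sup_{x \in K} d_Y\!\left( \Gamma_n(x), \Gamma(x) \right) = 0

% A classical DeepONet, in its standard form, gives one encoder-decoder
% instance of \Gamma_n: an encoder samples the input u at sensor points
% x_1, ..., x_m, a branch net produces coefficients b_k, and a trunk net
% supplies the decoding functions \tau_k:
(\Gamma_n u)(y) = \sum_{k=1}^{p_n} b_k\bigl( u(x_1), \ldots, u(x_m) \bigr) \, \tau_k(y)

Notion (2) implies (1), since for a given $K$ one can take $n$ large; the abstract's topological analysis concerns the converse, namely that (2) is strictly stronger in most relevant operator learning frameworks.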
@article{gödeke2025_2503.24092,
  title={New universal operator approximation theorem for encoder-decoder architectures (Preprint)},
  author={Janek Gödeke and Pascal Fernsel},
  journal={arXiv preprint arXiv:2503.24092},
  year={2025}
}