Learning Operators through Coefficient Mappings in Fixed Basis Spaces
Operator learning has emerged as a promising paradigm for approximating solution operators of partial differential equations (PDEs). However, conventional approaches typically rely on pointwise function discretizations, which often suffer from the curse of dimensionality, mesh dependence, and prohibitive training costs in high-resolution settings. To address these challenges, we propose the Fixed-Basis Coefficient-to-Coefficient Operator Network (FB-C2CNet), a novel framework that learns the operator mapping within the coefficient spaces induced by prescribed, fixed basis functions. Unlike existing methods that learn basis functions dynamically or rely on extensive sensor grids, FB-C2CNet encodes input functions onto a fixed set of basis functions (such as random features or finite element bases) and employs a neural network to predict the expansion coefficients of the solution. By decoupling basis selection from network training, this formulation significantly reduces the dimensionality of the input and output spaces and the number of trainable parameters. We further introduce metrics such as effective rank to analyze how the spectral properties of the coefficient space influence generalization performance. Extensive numerical experiments across a wide spectrum of benchmarks -- including linear, nonlinear, and high-dimensional problems -- demonstrate that FB-C2CNet achieves competitive predictive accuracy while reducing training time by orders of magnitude compared to conventional neural operators.
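The encode-map-decode pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses a fixed trigonometric basis (chosen so the example is exact; the paper also discusses random-feature and finite element bases) and a linear least-squares fit as a stand-in for the coefficient-to-coefficient neural network, applied to the linear antiderivative operator.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)

# Fixed, prescribed basis: {1, sin(k*pi*x), cos(k*pi*x)} for k = 1..5.
K = 5
Phi = np.column_stack(
    [np.ones_like(x)]
    + [np.sin(k * np.pi * x) for k in range(1, K + 1)]
    + [np.cos(k * np.pi * x) for k in range(1, K + 1)]
)

def coeffs(F):
    """Least-squares projection of sampled functions (rows of F) onto the basis."""
    return np.linalg.lstsq(Phi, F.T, rcond=None)[0].T

# Synthetic data: inputs f(x) = sum_k a_k sin(k*pi*x); the target operator is the
# antiderivative, u(x) = sum_k a_k (1 - cos(k*pi*x)) / (k*pi). Both f and u lie
# in the span of the fixed basis, so projections are exact up to round-off.
sin_modes = np.array([np.sin(k * np.pi * x) for k in range(1, K + 1)])
u_modes = np.array([(1 - np.cos(k * np.pi * x)) / (k * np.pi) for k in range(1, K + 1)])

N = 300
A = rng.normal(size=(N, K))
Cf, Cu = coeffs(A @ sin_modes), coeffs(A @ u_modes)

# Coefficient-to-coefficient map. A linear least-squares fit stands in for the
# neural network; since this operator is linear, the fit recovers it exactly.
W = np.linalg.lstsq(Cf, Cu, rcond=None)[0]

# Evaluate on fresh inputs: encode -> map coefficients -> decode on the grid.
A_test = rng.normal(size=(20, K))
U_test = A_test @ u_modes
U_pred = (coeffs(A_test @ sin_modes) @ W) @ Phi.T

rel_err = np.linalg.norm(U_pred - U_test) / np.linalg.norm(U_test)
print(f"relative L2 error: {rel_err:.2e}")
```

Because network training only ever sees low-dimensional coefficient vectors (here 11 entries per function rather than 200 grid values), the learned map is small and cheap to fit, which is the source of the training-cost reduction the abstract claims.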