Learning Green's functions associated with time-dependent partial differential equations

Journal of machine learning research (JMLR), 2022
Abstract

Neural operators are a popular technique in scientific machine learning to learn a mathematical model of the behavior of unknown physical systems from data. Neural operators are especially useful to learn solution operators associated with partial differential equations (PDEs) from pairs of forcing functions and solutions when numerical solvers are not available or the underlying physics is poorly understood. In this work, we attempt to provide theoretical foundations to understand the amount of training data needed to learn time-dependent PDEs. Given input-output pairs from a parabolic PDE in any spatial dimension $n \geq 1$, we derive the first theoretically rigorous scheme for learning the associated solution operator, which takes the form of a convolution with a Green's function $G$. Until now, rigorously learning Green's functions associated with time-dependent PDEs has been a major challenge in the field of scientific machine learning because $G$ may not be square-integrable when $n > 1$, and time-dependent PDEs have transient dynamics. By combining the hierarchical low-rank structure of $G$ together with randomized numerical linear algebra, we construct an approximant to $G$ that achieves a relative error of $\mathcal{O}(\Gamma_\epsilon^{-1/2}\epsilon)$ in the $L^1$-norm with high probability by using at most $\mathcal{O}(\epsilon^{-\frac{n+2}{2}}\log(1/\epsilon))$ input-output training pairs, where $\Gamma_\epsilon$ is a measure of the quality of the training dataset for learning $G$, and $\epsilon > 0$ is sufficiently small.
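To give a flavor of the randomized numerical linear algebra underlying this kind of result, the sketch below shows a much simpler setting than the paper's hierarchical scheme: a discretized, symmetric positive semidefinite, low-rank stand-in for a Green's function is recovered from random forcing/solution pairs alone, via a Nyström-style reconstruction. The matrix sizes, rank, and the use of a Nyström formula are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a discretized Green's function: a symmetric
# PSD, exactly low-rank N x N "solution operator" (illustration only).
N, rank = 200, 5
A = rng.standard_normal((N, rank))
G = A @ A.T

# Training data in the operator-learning sense: random forcings f_j and
# the corresponding solutions u_j = G f_j, stored as columns of F and U.
k = 12                            # number of pairs, oversampled past rank
F = rng.standard_normal((N, k))
U = G @ F

# Nystrom-style reconstruction using input-output pairs only:
# G_hat = U (F^T U)^+ U^T, exact when G is PSD with rank <= k.
G_hat = U @ np.linalg.pinv(F.T @ U) @ U.T

rel_err = np.linalg.norm(G - G_hat) / np.linalg.norm(G)
print(rel_err)
```

In this toy case the reconstruction is exact up to rounding error; the paper's contribution is to control the analogous error for a continuous, non-square-integrable $G$ with hierarchical (rather than global) low-rank structure, which is where the $\Gamma_\epsilon$ factor and the $\mathcal{O}(\epsilon^{-\frac{n+2}{2}}\log(1/\epsilon))$ sample complexity arise.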
