ResearchTrend.AI
Learning Green's functions associated with time-dependent partial differential equations

27 April 2022
N. Boullé
Seick Kim
Tianyi Shi
Alex Townsend
Abstract

Neural operators are a popular technique in scientific machine learning for learning a mathematical model of the behavior of unknown physical systems from data. They are especially useful for learning solution operators associated with partial differential equations (PDEs) from pairs of forcing functions and solutions when numerical solvers are unavailable or the underlying physics is poorly understood. In this work, we attempt to provide theoretical foundations for understanding the amount of training data needed to learn time-dependent PDEs. Given input-output pairs from a parabolic PDE in any spatial dimension $n\geq 1$, we derive the first theoretically rigorous scheme for learning the associated solution operator, which takes the form of a convolution with a Green's function $G$. Until now, rigorously learning Green's functions associated with time-dependent PDEs has been a major challenge in the field of scientific machine learning because $G$ may not be square-integrable when $n>1$, and time-dependent PDEs have transient dynamics. By combining the hierarchical low-rank structure of $G$ with randomized numerical linear algebra, we construct an approximant to $G$ that achieves a relative error of $\mathcal{O}(\Gamma_\epsilon^{-1/2}\epsilon)$ in the $L^1$-norm with high probability, using at most $\mathcal{O}(\epsilon^{-\frac{n+2}{2}}\log(1/\epsilon))$ input-output training pairs, where $\Gamma_\epsilon$ is a measure of the quality of the training dataset for learning $G$, and $\epsilon>0$ is sufficiently small.
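The abstract's key primitive, randomized numerical linear algebra for recovering a low-rank operator from input-output probes, can be illustrated on a plain matrix. The sketch below is not the paper's hierarchical scheme for Green's functions; it is a minimal randomized range-finder + SVD (in the style of Halko, Martinsson, and Tropp), where applying the matrix to random Gaussian vectors plays the role of querying the solution operator with random forcing functions. The function name and parameters are illustrative choices, not from the paper.

```python
import numpy as np

def randomized_low_rank(A, rank, oversample=10, seed=0):
    """Recover a rank-`rank` approximation of A from random probes.

    Illustrative sketch only: each matrix-vector product A @ omega is
    the analogue of one input-output training pair (random forcing in,
    solution out) used to learn a low-rank solution operator.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Probe the range of A with a Gaussian test matrix (with oversampling).
    Omega = rng.standard_normal((n, rank + oversample))
    Y = A @ Omega
    # Orthonormal basis for the sampled range.
    Q, _ = np.linalg.qr(Y)
    # Project A onto that basis and take a small dense SVD.
    B = Q.T @ A
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat
    return U[:, :rank], s[:rank], Vt[:rank]

# A rank-2 matrix is recovered to machine precision from random probes.
A = np.outer(np.arange(1.0, 51.0), np.ones(40)) + np.outer(np.ones(50), np.arange(40.0))
U, s, Vt = randomized_low_rank(A, rank=2)
rel_err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

With `rank + oversample` probes, the basis `Q` captures the dominant range of `A` with high probability, which is the same probabilistic guarantee flavor as the paper's $\mathcal{O}(\Gamma_\epsilon^{-1/2}\epsilon)$ error bound.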
