Explain Like I'm Five: Using LLMs to Improve PDE Surrogate Models with Text

Cooper Lorsung
Amir Barati Farimani
Abstract

Solving Partial Differential Equations (PDEs) is ubiquitous in science and engineering. Computational complexity and the difficulty of writing numerical solvers have motivated the development of data-driven machine learning techniques that generate solutions quickly. The recent rise in popularity of Large Language Models (LLMs) has made it easy to incorporate text into multimodal machine learning models, allowing additional system information such as boundary conditions and governing equations to be supplied as text. In this work, we explore using pretrained LLMs to integrate varying amounts of known system information into PDE learning. Using FactFormer as our testing backbone, we add a multimodal block to fuse numerical and textual information. We compare sentence-level embeddings, word-level embeddings, and a standard tokenizer across 2D Heat, Burgers, Navier-Stokes, and Shallow-Water data sets. Experiments on these challenging benchmarks show that pretrained LLMs can utilize text descriptions of system information, enabling accurate prediction from initial conditions alone.
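
To make the fusion step concrete, the snippet below is a minimal PyTorch sketch of the kind of multimodal block the abstract describes: numerical backbone features cross-attend over a projected text embedding of the system description. The class name MultimodalFusionBlock, the cross-attention design, and all dimensions are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class MultimodalFusionBlock(nn.Module):
    # Hypothetical fusion block (illustrative sketch, not the paper's code).
    # Numerical features act as queries; a projected text embedding of the
    # system description supplies keys/values; the result is added residually.
    def __init__(self, text_dim: int, feat_dim: int, num_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(text_dim, feat_dim)  # map text embedding into feature space
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, features: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # features: (batch, num_tokens, feat_dim) from the numerical backbone
        # text_emb: (batch, text_dim), e.g. a frozen sentence-level LLM embedding
        text = self.proj(text_emb).unsqueeze(1)     # (batch, 1, feat_dim)
        fused, _ = self.attn(features, text, text)  # cross-attention over text
        return self.norm(features + fused)          # residual connection + norm

# Example usage with made-up shapes:
block = MultimodalFusionBlock(text_dim=384, feat_dim=128)
feats = torch.randn(2, 64, 128)   # backbone features for a batch of 2
emb = torch.randn(2, 384)         # sentence embedding of the system description
out = block(feats, emb)           # (2, 64, 128)

Under this sketch, a sentence-level embedding of a description such as "periodic boundary conditions, viscosity 0.01" is fused once per forward pass; a word-level variant would instead pass a sequence of token embeddings as keys and values.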

@article{lorsung2025_2410.01137,
  title={Explain Like I'm Five: Using LLMs to Improve PDE Surrogate Models with Text},
  author={Cooper Lorsung and Amir Barati Farimani},
  journal={arXiv preprint arXiv:2410.01137},
  year={2025}
}