Dual Natural Gradient Descent for Scalable Training of Physics-Informed Neural Networks

Main: 14 pages · 5 figures · 15 tables · Bibliography: 3 pages · Appendix: 8 pages
Abstract

Natural-gradient methods markedly accelerate the training of Physics-Informed Neural Networks (PINNs), yet their Gauss--Newton update must be solved in parameter space, incurring a prohibitive $O(n^3)$ time complexity, where $n$ is the number of trainable network weights. We show that exactly the same step can instead be formulated in a generally smaller residual space of size $m = \sum_{\gamma} N_{\gamma} d_{\gamma}$, where each residual class $\gamma$ (e.g. PDE interior, boundary, initial data) contributes $N_{\gamma}$ collocation points of output dimension $d_{\gamma}$. Building on this insight, we introduce \textit{Dual Natural Gradient Descent} (D-NGD). D-NGD computes the Gauss--Newton step in residual space, augments it with a geodesic-acceleration correction at negligible extra cost, and provides both a dense direct solver for modest $m$ and a Nyström-preconditioned conjugate-gradient solver for larger $m$. Experimentally, D-NGD scales second-order PINN optimization to networks with up to 12.8 million parameters, delivers one to three orders of magnitude lower final $L^2$ error than first-order methods (Adam, SGD) and quasi-Newton methods, and -- crucially -- enables natural-gradient training of PINNs at this scale on a single GPU.
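
The residual-space reformulation rests on a standard linear-algebra identity for the damped Gauss--Newton step: $(J^\top J + \lambda I_n)^{-1} J^\top r = J^\top (J J^\top + \lambda I_m)^{-1} r$, so solving an $m \times m$ system in residual space yields exactly the same update as the $n \times n$ system in parameter space. Below is a minimal NumPy sketch of that identity only; the function names, the damping term `lam`, and the dense solves are illustrative assumptions, not the paper's implementation (which further adds geodesic acceleration and a Nyström-preconditioned conjugate-gradient solver for large $m$).

```python
import numpy as np

def gauss_newton_step_primal(J, r, lam=1e-8):
    """Damped Gauss-Newton step solved in parameter space: an n x n system, O(n^3)."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)

def gauss_newton_step_dual(J, r, lam=1e-8):
    """The same step solved in residual space: an m x m system, cheaper when m << n."""
    m = J.shape[0]
    alpha = np.linalg.solve(J @ J.T + lam * np.eye(m), r)
    return J.T @ alpha  # map the dual solution back to parameter space

# Sanity check of the identity on random data (m = 50 residuals, n = 2000 parameters).
rng = np.random.default_rng(0)
J = rng.standard_normal((50, 2000))  # Jacobian of the residuals w.r.t. the weights
r = rng.standard_normal(50)          # stacked residual vector
assert np.allclose(gauss_newton_step_primal(J, r),
                   gauss_newton_step_dual(J, r), atol=1e-6)
```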
