
Uncertainty Quantification for Physics-Informed Neural Networks with Extended Fiducial Inference

Main: 9 pages; Bibliography: 3 pages; Appendix: 21 pages; 15 figures; 20 tables
Abstract

Uncertainty quantification (UQ) in scientific machine learning is increasingly critical as neural networks are widely adopted to tackle complex problems across diverse scientific disciplines. For physics-informed neural networks (PINNs), a prominent model in scientific machine learning, uncertainty is typically quantified using Bayesian or dropout methods. However, both approaches suffer from a fundamental limitation: the prior distribution or dropout rate required to construct honest confidence sets cannot be determined without additional information. In this paper, we propose a novel method within the framework of extended fiducial inference (EFI) to provide rigorous uncertainty quantification for PINNs. The proposed method leverages a narrow-neck hyper-network to learn the parameters of the PINN and quantify their uncertainty based on imputed random errors in the observations. This approach overcomes the limitations of Bayesian and dropout methods, enabling the construction of honest confidence sets based solely on observed data. This advancement represents a significant breakthrough for PINNs, greatly enhancing their reliability, interpretability, and applicability to real-world scientific and engineering challenges. Moreover, it establishes a new theoretical framework for EFI, extending its application to large-scale models, eliminating the need for sparse hyper-networks, and significantly improving the automaticity and robustness of statistical inference.
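The abstract does not spell out the architecture, but the core idea it describes, generating PINN parameters from imputed observation errors through a low-dimensional hyper-network so that uncertainty in the errors propagates to the solution, can be illustrated with a short sketch. The PyTorch code below is not the authors' EFI implementation: the toy ODE u'(x) = u(x) with u(0) = 1, the layer sizes, the bottleneck width, the Gaussian scale used for the imputed errors, and the plain Adam training loop are all illustrative assumptions standing in for the paper's actual fiducial procedure.

# A minimal PyTorch sketch of the architecture idea in the abstract: a
# "narrow-neck" hyper-network maps imputed observation errors z to the
# parameters of a small PINN, so that uncertainty in z propagates to the
# PINN solution. NOT the authors' EFI implementation; the toy ODE, layer
# sizes, error scale, and training loop are illustrative assumptions.
import torch
import torch.nn as nn

N_OBS = 20                                     # number of noisy observations
PINN_SIZES = [(1, 16), (16, 16), (16, 1)]      # layer shapes of the generated PINN
N_PINN_PARAMS = sum(i * o + o for i, o in PINN_SIZES)

class NarrowNeckHyperNet(nn.Module):
    """Maps imputed errors z (one per observation) to a flat PINN parameter
    vector through a low-dimensional bottleneck (the "narrow neck")."""
    def __init__(self, n_obs=N_OBS, neck=8, n_out=N_PINN_PARAMS):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_obs, 64), nn.Tanh(),
                                    nn.Linear(64, neck))            # narrow neck
        self.decode = nn.Sequential(nn.Tanh(), nn.Linear(neck, n_out))

    def forward(self, z):
        return self.decode(self.encode(z))

def pinn_forward(x, theta):
    """Evaluate the generated PINN u_theta(x) from the flat parameter vector."""
    h, offset = x, 0
    for k, (n_in, n_out) in enumerate(PINN_SIZES):
        W = theta[offset:offset + n_in * n_out].view(n_out, n_in)
        b = theta[offset + n_in * n_out:offset + n_in * n_out + n_out]
        offset += n_in * n_out + n_out
        h = h @ W.T + b
        if k < len(PINN_SIZES) - 1:
            h = torch.tanh(h)
    return h

# Toy data: noisy observations of u(x) = exp(x), the solution of u' = u, u(0) = 1.
torch.manual_seed(0)
x_obs = torch.linspace(0.0, 1.0, N_OBS).unsqueeze(1)
y_obs = torch.exp(x_obs) + 0.05 * torch.randn_like(x_obs)
x_col = torch.linspace(0.0, 1.0, 50).unsqueeze(1).requires_grad_(True)  # collocation points

hyper = NarrowNeckHyperNet()
optimizer = torch.optim.Adam(hyper.parameters(), lr=1e-3)

for step in range(2000):
    z = 0.05 * torch.randn(N_OBS)              # imputed random errors (assumed scale)
    theta = hyper(z)
    # Data residual: the PINN should match the observations minus the imputed errors.
    fit = ((pinn_forward(x_obs, theta) - (y_obs - z.unsqueeze(1))) ** 2).mean()
    # Physics residual for the toy ODE u'(x) - u(x) = 0 at the collocation points.
    u = pinn_forward(x_col, theta)
    du = torch.autograd.grad(u.sum(), x_col, create_graph=True)[0]
    physics = ((du - u) ** 2).mean()
    # Initial condition u(0) = 1.
    ic = (pinn_forward(torch.zeros(1, 1), theta) - 1.0).pow(2).mean()
    loss = fit + physics + ic
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Resampling z and pushing it through the trained hyper-network yields an
# ensemble of PINN solutions; the spread of the ensemble is read as uncertainty.
with torch.no_grad():
    samples = torch.stack([pinn_forward(x_obs, hyper(0.05 * torch.randn(N_OBS)))
                           for _ in range(100)])
    lower = samples.quantile(0.025, dim=0)
    upper = samples.quantile(0.975, dim=0)
    print("average 95% band width:", (upper - lower).mean().item())

The bottleneck plays the role of the "narrow neck" in the abstract: every generated PINN parameter must pass through a few latent coordinates, which keeps the map from imputed errors to parameters manageable without requiring a sparse hyper-network. The paper's EFI procedure additionally infers the distribution of the imputed errors from the observed data rather than fixing a Gaussian scale, which this sketch does not attempt.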

@article{shih2025_2505.19136,
  title={Uncertainty Quantification for Physics-Informed Neural Networks with Extended Fiducial Inference},
  author={Frank Shih and Zhenghao Jiang and Faming Liang},
  journal={arXiv preprint arXiv:2505.19136},
  year={2025}
}