B-PL-PINN: Stabilizing PINN Training with Bayesian Pseudo Labeling

Training physics-informed neural networks (PINNs) for forward problems often suffers from severe convergence issues, hindering the propagation of information from regions where the desired solution is well-defined. Haitsiukevich and Ilin (2023) proposed an ensemble approach that extends the active training domain of each PINN based on i) ensemble consensus and ii) vicinity to (pseudo-)labeled points, thus ensuring that the information from the initial condition successfully propagates to the interior of the computational domain. In this work, we suggest replacing the ensemble by a Bayesian PINN, and consensus by an evaluation of the PINN's posterior variance. Our experiments show that this mathematically principled approach outperforms the ensemble on a set of benchmark problems and is competitive with PINN ensembles trained with combinations of Adam and L-BFGS.
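The sketch below illustrates the core pseudo-labeling step described above: collocation points are promoted to pseudo-labels only if the approximate posterior is confident there (low predictive variance) and they lie close to already (pseudo-)labeled points. This is not the authors' implementation; in particular, MC dropout is used here only as a stand-in Bayesian approximation, and the network architecture, threshold `std_tol`, and radius `radius` are illustrative placeholders.

import torch
import torch.nn as nn

class DropoutPINN(nn.Module):
    """Small MLP surrogate with dropout kept active at inference (MC dropout)."""
    def __init__(self, hidden=64, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.Tanh(), nn.Dropout(p),
            nn.Linear(hidden, 1),
        )

    def forward(self, xt):
        return self.net(xt)

@torch.no_grad()
def posterior_stats(model, xt, n_samples=50):
    """Monte Carlo estimate of the predictive mean and standard deviation."""
    model.train()  # keep dropout active to sample from the approximate posterior
    preds = torch.stack([model(xt) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)

def select_pseudo_labels(model, candidates, labeled_xt, std_tol=1e-2, radius=0.1):
    """Select candidates with low posterior variance that are also close to
    existing (pseudo-)labeled points: the two criteria named in the abstract."""
    mean, std = posterior_stats(model, candidates)
    dists = torch.cdist(candidates, labeled_xt).min(dim=1).values
    mask = (std.squeeze(-1) < std_tol) & (dists < radius)
    return candidates[mask], mean[mask]

# Illustrative usage: random collocation points in [0, 1]^2 (x, t),
# with "labeled" points concentrated near the initial condition t = 0.
model = DropoutPINN()
candidates = torch.rand(1000, 2)
labeled_xt = torch.rand(50, 2) * torch.tensor([1.0, 0.05])
new_xt, new_labels = select_pseudo_labels(model, candidates, labeled_xt)

The selected points and their predictive means would then be added to the (pseudo-)labeled set for the next training round, gradually extending the active training domain from the initial condition into the interior.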
@article{innerebner2025_2507.01714,
  title   = {B-PL-PINN: Stabilizing PINN Training with Bayesian Pseudo Labeling},
  author  = {Kevin Innerebner and Franz M. Rohrhofer and Bernhard C. Geiger},
  journal = {arXiv preprint arXiv:2507.01714},
  year    = {2025}
}