Improving physics-informed neural network extrapolation via transfer learning and adaptive activation functions

International Conference on Artificial Neural Networks (ICANN), 2025
Athanasios Papastathopoulos-Katsaros
Alexandra Stavrianidi
Zhandong Liu
Main: 6 pages · Appendix: 10 pages · Bibliography: 2 pages · 16 figures · 8 tables
Abstract

Physics-Informed Neural Networks (PINNs) are deep learning models that incorporate the governing physical laws of a system into the learning process, making them well suited to complex scientific and engineering problems. PINNs have recently gained widespread attention as a powerful framework for combining physical principles with data-driven modeling to improve prediction accuracy. Despite these successes, however, PINNs often exhibit poor extrapolation performance outside the training domain and are highly sensitive to the choice of activation function (AF). In this paper, we introduce a transfer learning (TL) method to improve the extrapolation capability of PINNs. Our approach applies TL within an extended training domain, using only a small number of carefully selected collocation points. In addition, we propose an adaptive AF that takes the form of a linear combination of standard AFs, improving both the robustness and the accuracy of the model. Across a series of experiments, our method reduces relative L2 error by 40% on average and mean absolute error by 50% on average in the extrapolation domain, without a significant increase in computational cost. The code is available at this https URL.
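The adaptive AF described in the abstract — a linear combination of standard activation functions — can be sketched as follows. This is a minimal illustration only: the specific basis functions (tanh, sin, SiLU) and the weight parameterization are assumptions, not the paper's exact choices.

```python
import numpy as np

class AdaptiveActivation:
    """Sketch of an adaptive activation function: a weighted linear
    combination of standard AFs. In a PINN, the mixing weights would be
    trainable parameters optimized jointly with the network weights;
    the basis set here (tanh, sin, SiLU) is an illustrative assumption."""

    def __init__(self):
        # Initialize with equal mixing weights over the three basis AFs.
        self.weights = np.array([1.0, 1.0, 1.0]) / 3.0

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        # Stack basis activations: tanh(x), sin(x), and SiLU(x) = x * sigmoid(x).
        basis = np.stack([np.tanh(x), np.sin(x), x / (1.0 + np.exp(-x))])
        # Return the weighted combination along the basis axis.
        return np.tensordot(self.weights, basis, axes=1)

# Example: evaluate the combined activation on a small input array.
act = AdaptiveActivation()
out = act(np.array([-1.0, 0.0, 1.0]))
```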
