Pre-training Graph Neural Networks with Structural Fingerprints for Materials Discovery

Abstract

In recent years, pre-trained graph neural networks (GNNs) have been developed as general models which can be effectively fine-tuned for various downstream tasks in materials science, and have shown significant improvements in accuracy and data efficiency. The most widely used pre-training methods currently involve either supervised training to fit a general force field or self-supervised training by denoising equilibrium atomic structures. Both approaches require datasets generated from quantum mechanical calculations, which quickly becomes intractable at larger scales. Here we propose a novel pre-training objective which instead uses cheaply-computed structural fingerprints as targets, and we show it maintains comparable performance across a range of different structural descriptors. Our experiments show this approach can serve as a general strategy for pre-training GNNs, with applications toward large-scale foundation models for atomistic data.
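To make the objective concrete, below is a minimal, self-contained PyTorch sketch of fingerprint-target pre-training. This is not the authors' code: the toy encoder (ToyGNNLayer, FingerprintPretrainer), the fingerprint length FP_DIM, and the random targets tensor are all illustrative assumptions. In practice the targets would be per-atom descriptor vectors (e.g., SOAP or ACSF fingerprints from a descriptor library) computed cheaply without quantum mechanical calculations.

# Hypothetical sketch of fingerprint-target pre-training (not the paper's code).
# A toy message-passing encoder regresses per-atom fingerprint vectors with MSE,
# in place of the usual force-field or denoising objectives.
import torch
import torch.nn as nn

FP_DIM = 128   # fingerprint length (descriptor-dependent; assumed here)
EMB_DIM = 64   # hidden width of the toy encoder

class ToyGNNLayer(nn.Module):
    """One round of mean-aggregation message passing over a dense adjacency."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, h, adj):
        # adj: (N, N) row-normalized adjacency; h: (N, dim) node features
        msg = adj @ h  # aggregate neighbor features
        return torch.relu(self.lin(torch.cat([h, msg], dim=-1)))

class FingerprintPretrainer(nn.Module):
    def __init__(self, n_species=10):
        super().__init__()
        self.embed = nn.Embedding(n_species, EMB_DIM)
        self.layers = nn.ModuleList([ToyGNNLayer(EMB_DIM) for _ in range(3)])
        self.head = nn.Linear(EMB_DIM, FP_DIM)  # regress per-atom fingerprints

    def forward(self, species, adj):
        h = self.embed(species)
        for layer in self.layers:
            h = layer(h, adj)
        return self.head(h)

# --- one toy pre-training step ---
N = 32                                       # atoms in a toy structure
species = torch.randint(0, 10, (N,))         # atomic-species indices
adj = torch.rand(N, N)
adj = adj / adj.sum(dim=-1, keepdim=True)    # row-normalize the toy adjacency
targets = torch.randn(N, FP_DIM)             # stand-in for real fingerprints

model = FingerprintPretrainer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

opt.zero_grad()
pred = model(species, adj)
loss = nn.functional.mse_loss(pred, targets)  # fit fingerprints, not forces
loss.backward()
opt.step()
print(f"pre-training loss: {loss.item():.4f}")

Because the targets come from classical structural descriptors rather than DFT labels, this kind of objective can in principle be applied to arbitrarily large unlabeled structure collections before fine-tuning on a downstream property.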

@article{jia2025_2503.01227,
  title={Pre-training Graph Neural Networks with Structural Fingerprints for Materials Discovery},
  author={Shuyi Jia and Shitij Govil and Manav Ramprasad and Victor Fung},
  journal={arXiv preprint arXiv:2503.01227},
  year={2025}
}