Surprisingly High Redundancy in Electronic Structure Data
Accurate prediction of electronic structure underpins advances in chemistry, materials science, and condensed matter physics. In recent years, machine learning (ML) has enabled powerful surrogate models that predict the ground state electron density and related properties at a fraction of the computational cost of conventional first-principles simulations. Such ML models typically rely on massive datasets generated through expensive Kohn-Sham Density Functional Theory calculations. A key reason for relying on such large datasets is the lack of prior knowledge about which portions of the data are essential and which are redundant. This study reveals significant redundancies in electronic structure datasets across a variety of material systems, including molecules, simple metals, and chemically complex alloys, challenging the notion that extensive datasets are essential for accurate ML-based electronic structure predictions. We demonstrate that even random pruning can substantially reduce dataset size with minimal loss in predictive accuracy. Furthermore, a state-of-the-art coverage-based pruning strategy that selects data across all learning difficulties retains chemical accuracy and model generalizability with up to 100-fold less data, while reducing training time by a factor of three or more. By contrast, widely used importance-based pruning methods, which eliminate easy-to-learn data, can fail catastrophically at high pruning factors because they sharply reduce data coverage. This previously unexplored redundancy in electronic structure data opens the possibility of identifying a minimal, essential dataset representative of each material class.
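To make the two pruning strategies concrete, the sketch below contrasts uniform random pruning with a stratified, coverage-style selection over a per-sample difficulty score. This is a minimal illustration under stated assumptions, not the authors' implementation: the availability of a difficulty score (e.g., a proxy such as prediction error from a small pilot model), the quantile binning, and the function names `random_prune` and `coverage_prune` are all assumptions made for exposition.

```python
import numpy as np


def random_prune(n_samples: int, keep_fraction: float, seed: int = 0) -> np.ndarray:
    """Keep a uniform random subset of training indices (baseline strategy)."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(round(keep_fraction * n_samples)))
    return rng.choice(n_samples, size=n_keep, replace=False)


def coverage_prune(difficulty: np.ndarray, keep_fraction: float,
                   n_bins: int = 10, seed: int = 0) -> np.ndarray:
    """Coverage-style selection (illustrative): bin samples by a hypothetical
    difficulty score and draw evenly from every bin, so the pruned set still
    spans easy and hard examples alike rather than discarding the easy ones."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(round(keep_fraction * len(difficulty))))
    # Quantile bin edges, so each bin holds roughly the same number of samples.
    edges = np.quantile(difficulty, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, difficulty, side="right") - 1,
                   0, n_bins - 1)
    per_bin = n_keep // n_bins  # integer division may drop a few slots
    kept = []
    for b in range(n_bins):
        members = np.flatnonzero(bins == b)
        take = min(per_bin, len(members))
        kept.append(rng.choice(members, size=take, replace=False))
    return np.concatenate(kept)
```

The design point the sketch illustrates is the one the abstract draws: stratified selection preserves coverage across the full difficulty distribution, whereas importance-based pruning that simply drops the lowest-difficulty samples collapses that coverage at aggressive pruning factors.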