HMAE: Self-Supervised Few-Shot Learning for Quantum Spin Systems

Abstract

Quantum machine learning for spin and molecular systems faces the critical challenges of scarce labeled data and computationally expensive simulations. To address these limitations, we introduce Hamiltonian-Masked Autoencoding (HMAE), a novel self-supervised framework that pre-trains transformers on unlabeled quantum Hamiltonians, enabling efficient few-shot transfer learning. Unlike random masking approaches, HMAE employs a physics-informed strategy grounded in quantum information theory that selectively masks Hamiltonian terms according to their physical significance. Experiments on 12,500 quantum Hamiltonians (60% real-world, 40% synthetic) show that with only 10 labeled examples HMAE achieves 85.3% ± 1.5% accuracy in phase classification and 0.15 ± 0.02 eV MAE in ground-state energy prediction, a statistically significant improvement (p < 0.01) over classical graph neural networks (78.1% ± 2.1%) and quantum neural networks (76.8% ± 2.3%). Our method's primary advantage is exceptional sample efficiency, reducing the number of required labeled examples by 3-5x compared to baseline methods, though we emphasize that ground-truth values for fine-tuning and evaluation still require exact diagonalization or tensor networks. We explicitly acknowledge that our current approach is restricted to small quantum systems (12 qubits during training, with only limited extension to 16-20 qubits at test time) and that, while promising within this regime, this size restriction prevents immediate application to the larger systems of practical interest in materials science and quantum chemistry.
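
As an illustration of the pre-training objective described above, the sketch below (not from the paper) shows a minimal PyTorch version of Hamiltonian-masked autoencoding: each Hamiltonian is represented as a list of (Pauli-string, coefficient) terms, the "most significant" terms are masked, and a small transformer encoder is trained to reconstruct their coefficients from the visible context. The |coefficient| importance heuristic, the term encoding, and all hyperparameters are placeholder assumptions standing in for the paper's quantum-information-based masking criterion.

# Minimal HMAE-style pre-training sketch (assumptions noted above, not the paper's code).
import torch
import torch.nn as nn

PAULIS = {"I": 0, "X": 1, "Y": 2, "Z": 3}

def encode_terms(terms):
    """Map [(pauli_string, coeff), ...] to integer tokens and coefficient targets."""
    tokens = torch.tensor([[PAULIS[p] for p in pauli] for pauli, _ in terms])
    coeffs = torch.tensor([c for _, c in terms], dtype=torch.float32)
    return tokens, coeffs  # (n_terms, n_qubits), (n_terms,)

class HMAEEncoder(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.pauli_embed = nn.Embedding(4, d_model)      # I, X, Y, Z
        self.coeff_proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.coeff_head = nn.Linear(d_model, 1)          # reconstruct masked coefficients

    def forward(self, tokens, coeffs, mask):
        # One sequence element per Hamiltonian term: mean of its per-qubit
        # Pauli embeddings plus a projection of its (possibly hidden) coefficient.
        term_emb = self.pauli_embed(tokens).mean(dim=2)          # (B, T, d)
        visible = coeffs.masked_fill(mask, 0.0).unsqueeze(-1)    # zero out masked coeffs
        h = self.encoder(term_emb + self.coeff_proj(visible))
        return self.coeff_head(h).squeeze(-1)                    # (B, T)

def importance_mask(coeffs, mask_ratio=0.3):
    """Mask the most 'significant' terms; |coeff| is a placeholder heuristic."""
    k = max(1, int(mask_ratio * coeffs.shape[-1]))
    idx = coeffs.abs().topk(k, dim=-1).indices
    mask = torch.zeros_like(coeffs, dtype=torch.bool)
    mask.scatter_(-1, idx, True)
    return mask

# Toy pre-training step on one 4-qubit, transverse-field-Ising-like Hamiltonian.
terms = [("ZZII", -1.0), ("IZZI", -1.0), ("IIZZ", -1.0),
         ("XIII", 0.5), ("IXII", 0.5), ("IIXI", 0.5), ("IIIX", 0.5)]
tokens, coeffs = encode_terms(terms)
tokens, coeffs = tokens.unsqueeze(0), coeffs.unsqueeze(0)        # add batch dimension

model = HMAEEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

mask = importance_mask(coeffs)
opt.zero_grad()
pred = model(tokens, coeffs, mask)
loss = nn.functional.mse_loss(pred[mask], coeffs[mask])          # loss only on masked terms
loss.backward()
opt.step()
print(f"masked-reconstruction loss: {loss.item():.4f}")

In this sketch the few-shot stage would reuse the pre-trained encoder and replace coeff_head with a small task head (e.g., phase classifier or energy regressor) fine-tuned on the handful of labeled Hamiltonians.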

@article{shihab2025_2505.03140,
  title={HMAE: Self-Supervised Few-Shot Learning for Quantum Spin Systems},
  author={Ibne Farabi Shihab and Sanjeda Akter and Anuj Sharma},
  journal={arXiv preprint arXiv:2505.03140},
  year={2025}
}