
Machine-Learning Accelerated Calculations of Reduced Density Matrices

Main: 10 pages, 11 figures, 7 tables; Appendix: 32 pages
Abstract

n-particle reduced density matrices (n-RDMs) play a central role in understanding correlated phases of matter. Yet the calculation of n-RDMs is often computationally inefficient for strongly correlated states, particularly when the system sizes are large. In this work, we propose to use neural network (NN) architectures to accelerate the calculation of, and even predict, the n-RDMs for large-size systems. The underlying intuition is that n-RDMs are often smooth functions over the Brillouin zone (BZ) (certainly true for gapped states) and are thus interpolable, allowing NNs trained on small-size n-RDMs to predict large-size ones. Building on this intuition, we devise two NNs: (i) a self-attention NN that maps random RDMs to physical ones, and (ii) a Sinusoidal Representation Network (SIREN) that directly maps momentum-space coordinates to RDM values. We test the NNs on three 2D models: the pair-pair correlation functions of the Richardson model of superconductivity, the translationally invariant 1-RDM in a four-band model with short-range repulsion, and the translation-breaking 1-RDM in the half-filled Hubbard model. We find that a SIREN trained on a 6×6 momentum mesh can predict the 18×18 pair-pair correlation function with a relative accuracy of 0.839. NNs trained on 6×6 to 8×8 meshes can provide high-quality initial guesses for 50×50 translation-invariant Hartree-Fock (HF) and 30×30 fully translation-breaking-allowed HF calculations, reducing the number of iterations required for convergence by up to 91.63% and 92.78%, respectively, compared to random initializations. Our results illustrate the potential of NN-based methods for interpolable n-RDMs, which might open a new avenue for future research on strongly correlated phases.
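The SIREN mentioned above is a standard coordinate network with sine activations, here mapping a 2D momentum-space coordinate to a scalar RDM entry. The sketch below is a minimal, hypothetical illustration of that idea (the layer widths, the frequency factor `w0 = 30`, and the function names are assumptions, not details from the paper); the smoothness of gapped-state n-RDMs over the BZ is what lets a network fit on a coarse mesh be queried on a finer one.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_init(d_in, d_out, w0=30.0, first=False):
    # SIREN-style init: uniform weights whose scale keeps sine
    # pre-activations well-distributed (first layer uses 1/d_in).
    c = 1.0 / d_in if first else np.sqrt(6.0 / d_in) / w0
    W = rng.uniform(-c, c, size=(d_out, d_in))
    b = rng.uniform(-c, c, size=d_out)
    return W, b

def siren_forward(params, k, w0=30.0):
    # k: (N, 2) momentum coordinates, rescaled to [-1, 1]^2.
    h = k
    for W, b in params[:-1]:
        h = np.sin(w0 * (h @ W.T + b))  # sinusoidal hidden layers
    W, b = params[-1]
    return h @ W.T + b  # linear output: predicted RDM values

# A tiny 2 -> 64 -> 64 -> 1 network (widths are illustrative).
layers = [(2, 64), (64, 64), (64, 1)]
params = [siren_init(i, o, first=(n == 0)) for n, (i, o) in enumerate(layers)]

# Coarse 6x6 mesh for training-style evaluation ...
k6 = np.stack(np.meshgrid(np.linspace(-1, 1, 6),
                          np.linspace(-1, 1, 6)), -1).reshape(-1, 2)
out6 = siren_forward(params, k6)

# ... and the same network can be queried on a finer 18x18 mesh,
# which is the interpolation step the abstract exploits.
k18 = np.stack(np.meshgrid(np.linspace(-1, 1, 18),
                           np.linspace(-1, 1, 18)), -1).reshape(-1, 2)
out18 = siren_forward(params, k18)
```

Because the network takes coordinates rather than a fixed-size grid as input, the mesh resolution at training time places no constraint on the resolution at prediction time.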
