Multi-Lattice Sampling of Quantum Field Theories via Neural Operator-based Flows

1 January 2024
Bálint Máté
François Fleuret
Abstract

We consider the problem of sampling discrete field configurations $\phi$ from the Boltzmann distribution $[d\phi]\, Z^{-1} e^{-S[\phi]}$, where $S$ is the lattice discretization of the continuous Euclidean action $\mathcal{S}$ of some quantum field theory. Since such densities arise as approximations of an underlying functional density $[\mathcal{D}\phi(x)]\, \mathcal{Z}^{-1} e^{-\mathcal{S}[\phi(x)]}$, we frame the task as an instance of operator learning. In particular, we propose to approximate a time-dependent operator $\mathcal{V}_t$ whose time integral provides a mapping between the functional distributions of the free theory $[\mathcal{D}\phi(x)]\, \mathcal{Z}_0^{-1} e^{-\mathcal{S}_0[\phi(x)]}$ and of the target theory $[\mathcal{D}\phi(x)]\, \mathcal{Z}^{-1} e^{-\mathcal{S}[\phi(x)]}$. Whenever a particular lattice is chosen, the operator $\mathcal{V}_t$ can be discretized to a finite-dimensional, time-dependent vector field $V_t$, which in turn induces a continuous normalizing flow between finite-dimensional distributions over the chosen lattice. This flow can then be trained to be a diffeomorphism between the discretized free and target theories, $[d\phi]\, Z_0^{-1} e^{-S_0[\phi]}$ and $[d\phi]\, Z^{-1} e^{-S[\phi]}$. We run experiments on the $\phi^4$-theory to explore to what extent such operator-based flow architectures generalize to lattice sizes they were not trained on, and show that pretraining on smaller lattices can lead to a speedup over training only at the target lattice size.
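To make the construction concrete, below is a minimal sketch in Python/PyTorch, not the authors' implementation. A convolutional network with circular padding serves as an illustrative, resolution-agnostic stand-in for the discretized operator $V_t$; the flow is Euler-integrated from the free theory at $t=0$ to the target at $t=1$, the change of log-density is tracked with a Hutchinson divergence estimate, and the flow is trained by reverse KL against a lattice $\phi^4$ action. All names, the architecture, and the couplings are assumptions made for illustration.

```python
# Hypothetical sketch, not the paper's code. Convolutions apply unchanged to
# any lattice size, which is the property that permits multi-lattice training.
import torch
import torch.nn as nn

def phi4_action(phi, m2=1.0, lam=1.0):
    """Euclidean lattice phi^4 action on a 2D periodic lattice.
    phi: (batch, L, L); m2 and lam are illustrative couplings."""
    kinetic = sum(((phi - torch.roll(phi, 1, dims=d)) ** 2).sum(dim=(1, 2))
                  for d in (1, 2)) / 2
    potential = (m2 / 2 * phi ** 2 + lam / 24 * phi ** 4).sum(dim=(1, 2))
    return kinetic + potential

class LatticeVectorField(nn.Module):
    """Time-dependent vector field V_t(phi); a stand-in for the discretized
    operator. Time enters as an extra input channel."""
    def __init__(self, hidden=32):
        super().__init__()
        def conv(c_in, c_out):
            return nn.Conv2d(c_in, c_out, 3, padding=1, padding_mode="circular")
        self.net = nn.Sequential(conv(2, hidden), nn.GELU(),
                                 conv(hidden, hidden), nn.GELU(),
                                 conv(hidden, 1))

    def forward(self, t, phi):
        t_channel = torch.full_like(phi, t)
        return self.net(torch.stack([phi, t_channel], dim=1)).squeeze(1)

def flow(v, phi0, n_steps=32):
    """Euler-integrate dphi/dt = V_t(phi) from t=0 (free) to t=1 (target),
    accumulating log|det J| via a Hutchinson estimate of div V_t."""
    phi = phi0.requires_grad_(True)
    logdet = torch.zeros(phi0.shape[0])
    dt = 1.0 / n_steps
    for k in range(n_steps):
        v_val = v(k * dt, phi)
        eps = torch.randn_like(phi)
        (vjp,) = torch.autograd.grad(v_val, phi, grad_outputs=eps,
                                     create_graph=True)
        logdet = logdet + dt * (vjp * eps).sum(dim=(1, 2))  # ~ div V_t
        phi = phi + dt * v_val
    return phi, logdet

# Reverse-KL training: minimize E[S(phi_1) - log|det J|], using a standard
# Gaussian as an illustrative stand-in for free-theory samples.
L = 8
v_field = LatticeVectorField()
opt = torch.optim.Adam(v_field.parameters(), lr=1e-3)
for step in range(200):
    phi0 = torch.randn(64, L, L)
    phi1, logdet = flow(v_field, phi0)
    loss = (phi4_action(phi1) - logdet).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A Fourier neural operator or any other discretization-consistent architecture could replace the CNN here; the essential property the abstract relies on is that the same parameters define $V_t$ on every lattice size, so a model pretrained on small lattices can be reused or fine-tuned on larger ones.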
