Rapid training of Hamiltonian graph networks without gradient descent

6 June 2025
Atamert Rahma
Chinmay Datar
Ana Cukarska
Felix Dietrich
    AI4CE
ArXiv (abs) | PDF | HTML
Main: 9 pages · 13 figures · 22 tables · Bibliography: 6 pages · Appendix: 10 pages
Abstract

Learning dynamical systems that respect physical symmetries and constraints remains a fundamental challenge in data-driven modeling. Integrating physical laws with graph neural networks facilitates principled modeling of complex N-body dynamics and yields accurate and permutation-invariant models. However, training graph neural networks with iterative, gradient-based optimization algorithms (e.g., Adam, RMSProp, LBFGS) is often slow, especially for large, complex systems. Compared against 15 different optimizers, we demonstrate that Hamiltonian Graph Networks (HGN) can be trained up to 600x faster, with comparable accuracy, by replacing iterative optimization with random feature-based parameter construction. We show robust performance in diverse simulations, including N-body mass-spring systems in up to 3 dimensions with different geometries, while retaining essential physical invariances with respect to permutation, rotation, and translation. We reveal that even when trained on minimal 8-node systems, the model generalizes in a zero-shot manner to systems as large as 4096 nodes without retraining. Our work challenges the dominance of iterative gradient-descent-based optimization algorithms for training neural network models of physical systems.
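The abstract does not spell out the parameter construction, but the general random-feature recipe it alludes to is: sample the hidden-layer parameters instead of training them, then obtain the linear readout in closed form. The NumPy sketch below illustrates that recipe on a toy 1D mass-spring vector field only; the function name build_random_feature_model, the tanh features, and all hyperparameters are illustrative assumptions, not the authors' implementation, and it does not reproduce the graph structure or Hamiltonian parameterization of the paper.

# Minimal sketch of random-feature parameter construction (illustrative assumption,
# not the paper's pipeline): hidden weights are sampled rather than optimized, and
# only the linear readout is computed in closed form via regularized least squares.
import numpy as np

rng = np.random.default_rng(0)

def build_random_feature_model(X, y, n_features=512, scale=1.0, reg=1e-8):
    """Fit f(x) ~= tanh(X W + b) beta with W, b sampled and beta from least squares.

    X: (n_samples, d_in) inputs, y: (n_samples, d_out) targets.
    n_features, scale, reg are illustrative hyperparameters (assumptions).
    """
    d_in = X.shape[1]
    W = rng.normal(0.0, scale, size=(d_in, n_features))   # sampled, never updated
    b = rng.uniform(-np.pi, np.pi, size=n_features)
    Phi = np.tanh(X @ W + b)                               # random hidden features
    # Closed-form ridge solution for the readout (replaces gradient descent).
    beta = np.linalg.solve(Phi.T @ Phi + reg * np.eye(n_features), Phi.T @ y)
    return lambda Xnew: np.tanh(Xnew @ W + b) @ beta

# Toy usage: regress the time derivatives of a 1D mass-spring system from states.
q, p = rng.normal(size=(1000, 1)), rng.normal(size=(1000, 1))
X = np.hstack([q, p])
y = np.hstack([p, -q])            # Hamiltonian vector field of H = (q^2 + p^2) / 2
model = build_random_feature_model(X, y)
print(np.mean((model(X) - y) ** 2))   # small training error, no iterative optimizer

Fitting the readout amounts to a single linear solve, which is broadly where random-feature approaches gain their speed advantage over iterative optimizers.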

@article{rahma2025_2506.06558,
  title={Rapid training of Hamiltonian graph networks without gradient descent},
  author={Atamert Rahma and Chinmay Datar and Ana Cukarska and Felix Dietrich},
  journal={arXiv preprint arXiv:2506.06558},
  year={2025}
}