ResearchTrend.AI

PADAM: Parallel averaged Adam reduces the error for stochastic optimization in scientific machine learning
arXiv: 2505.22085

28 May 2025
Arnulf Jentzen
Julian Kranz
Adrian Riekert
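The paper titles above concern iterate-averaged variants of Adam for stochastic optimization. As a generic illustration only — not the PADAM algorithm from the paper, whose details are not given here — the sketch below runs standard Adam on a noisy quadratic objective and additionally maintains a Polyak-style running average of the tail iterates; the hyperparameters and the `avg_start` cutoff are illustrative assumptions.

```python
import numpy as np

def adam_with_averaging(grad, x0, steps=2000, lr=0.01,
                        beta1=0.9, beta2=0.999, eps=1e-8, avg_start=1000):
    """Standard Adam plus a running average of the tail iterates.

    Generic sketch of iterate averaging on top of Adam; NOT the
    PADAM method of the paper. `avg_start` (when averaging begins)
    is an illustrative assumption.
    """
    x = np.array(x0, dtype=float)
    m = np.zeros_like(x)   # first-moment estimate
    v = np.zeros_like(x)   # second-moment estimate
    avg = np.zeros_like(x)
    n_avg = 0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)   # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
        if t >= avg_start:             # average only the tail iterates
            n_avg += 1
            avg += (x - avg) / n_avg   # running mean of the iterates
    return x, avg

# Noisy quadratic toy objective with minimizer at the origin.
rng = np.random.default_rng(0)
grad = lambda x: 2.0 * x + 0.1 * rng.standard_normal(x.shape)
x_last, x_avg = adam_with_averaging(grad, np.ones(5))
```

Under stochastic gradients, the averaged iterate `x_avg` typically fluctuates less around the optimum than the last iterate `x_last`, which is the general motivation for averaging schemes of this kind.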

Papers citing "PADAM: Parallel averaged Adam reduces the error for stochastic optimization in scientific machine learning"

1 / 1 papers shown
Averaged Adam accelerates stochastic optimization in the training of deep neural network approximations for partial differential equation and optimal control problems
Steffen Dereich
Arnulf Jentzen
Adrian Riekert
10 Jan 2025