Non-convergence to the optimal risk for Adam and stochastic gradient descent optimization in the training of deep neural networks

3 March 2025
Thang Do
Arnulf Jentzen
Adrian Riekert
Abstract

Despite the omnipresent use of stochastic gradient descent (SGD) optimization methods in the training of deep neural networks (DNNs), it remains, in essentially all practically relevant scenarios, a fundamental open problem to provide a rigorous theoretical explanation for the success (and the limitations) of SGD optimization methods in deep learning. In particular, it remains an open question to prove or disprove convergence of the true risk of SGD optimization methods to the optimal true risk value in the training of DNNs. In one of the main results of this work we reveal, for a general class of activations, loss functions, random initializations, and SGD optimization methods (including, for example, standard SGD, momentum SGD, Nesterov accelerated SGD, Adagrad, RMSprop, Adadelta, Adam, Adamax, Nadam, Nadamax, and AMSGrad), that in the training of an arbitrary fully-connected feedforward DNN it does not hold that the true risk of the considered optimizer converges in probability to the optimal true risk value. Nonetheless, the true risk of the considered SGD optimization method may very well converge to a strictly suboptimal true risk value.

View on arXiv: https://arxiv.org/abs/2503.01660
@article{do2025_2503.01660,
  title={Non-convergence to the optimal risk for Adam and stochastic gradient descent optimization in the training of deep neural networks},
  author={Thang Do and Arnulf Jentzen and Adrian Riekert},
  journal={arXiv preprint arXiv:2503.01660},
  year={2025}
}