Investigating Alternatives to the Root Mean Square for Adaptive Gradient Methods

10 June 2021
Brett Daley
Chris Amato
Abstract

Adam is an adaptive gradient method that has experienced widespread adoption due to its fast and reliable training performance. Recent approaches have not offered significant improvement over Adam, often because they do not innovate upon one of its core features: normalization by the root mean square (RMS) of recent gradients. However, as noted by Kingma and Ba (2015), any number of $L^p$ normalizations are possible, with the RMS corresponding to the specific case of $p=2$. In our work, we theoretically and empirically characterize the influence of different $L^p$ norms on adaptive gradient methods for the first time. We show mathematically how the choice of $p$ influences the size of the steps taken, while leaving other desirable properties unaffected. We evaluate Adam with various $L^p$ norms on a suite of deep learning benchmarks, and find that $p > 2$ consistently leads to improved learning speed and final performance. The choices of $p=3$ or $p=6$ also match or outperform state-of-the-art methods in all of our experiments.
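
The core idea described in the abstract, replacing Adam's RMS (the $p=2$ case) with a general $L^p$ moving average of gradient magnitudes, can be sketched as follows. This is a minimal illustration inferred from the abstract only; the function name `lp_adam_step`, the hyperparameter defaults, and the bias-correction details are assumptions, not the authors' reference implementation.

```python
import numpy as np

def lp_adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                 eps=1e-8, p=3):
    """One step of an Adam-like update that normalizes by the L^p moving
    average of gradient magnitudes instead of the RMS (p=2).

    Hypothetical sketch based on the abstract; names and defaults are assumed.
    """
    # First moment: exponential moving average of gradients (as in Adam).
    m = beta1 * m + (1 - beta1) * grad
    # p-th moment: exponential moving average of |g|^p (Adam uses p=2).
    v = beta2 * v + (1 - beta2) * np.abs(grad) ** p

    # Bias correction for the zero-initialized moving averages.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    # Normalize by the p-th root of v_hat; p=2 recovers standard Adam.
    param = param - lr * m_hat / (v_hat ** (1.0 / p) + eps)
    return param, m, v

# Usage sketch: minimize f(x) = x^2 starting from x = 5.0 with p = 3.
x, m, v = np.array(5.0), 0.0, 0.0
for t in range(1, 1001):
    grad = 2 * x
    x, m, v = lp_adam_step(x, grad, m, v, t, p=3)
```

With $p=2$ this reduces to the familiar Adam update, so the parameter $p$ isolates the effect of the normalization exponent that the paper studies.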
