
Orthogonalising gradients to speed up neural network optimisation

14 February 2022
Mark Tuddenham, Adam Prugel-Bennett, Jonathan Hare
ODL
ArXiv (abs) · PDF · HTML
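As the title suggests, the paper concerns orthogonalising gradients before the optimiser applies its update. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch: a 2-D gradient matrix is replaced by its nearest semi-orthogonal factor U Vᵀ obtained from an SVD, while vector parameters get a plain SGD step. The function name `orthogonalise` and the surrounding training step are assumptions for illustration only, not the authors' exact algorithm.

```python
# Illustrative sketch only: orthogonalise a layer's gradient via SVD
# before the update step. Not necessarily the paper's exact procedure.
import torch

def orthogonalise(grad: torch.Tensor) -> torch.Tensor:
    """Return the closest semi-orthogonal matrix U V^T to a 2-D gradient."""
    U, _, Vh = torch.linalg.svd(grad, full_matrices=False)
    return U @ Vh

# Hypothetical training step using the orthogonalised update.
model = torch.nn.Linear(128, 64)
x, y = torch.randn(32, 128), torch.randn(32, 64)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()

lr = 1e-2
with torch.no_grad():
    for p in model.parameters():
        if p.grad is None:
            continue
        if p.grad.ndim == 2:
            p -= lr * orthogonalise(p.grad)  # orthogonalised step for weight matrices
        else:
            p -= lr * p.grad                 # plain SGD step for biases / vectors
```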

Papers citing "Orthogonalising gradients to speed up neural network optimisation"

3 / 3 papers shown
Generalized Gradient Norm Clipping & Non-Euclidean $(L_0,L_1)$-Smoothness
Thomas Pethick, Wanyun Xie, Mete Erdogan, Kimon Antonakopoulos, Tony Silveti-Falls, Volkan Cevher
02 Jun 2025

SUMO: Subspace-Aware Moment-Orthogonalization for Accelerating Memory-Efficient LLM Training
Yehonathan Refael, Guy Smorodinsky, Tom Tirer, Ofir Lindenbaum
30 May 2025

Understanding Gradient Orthogonalization for Deep Learning via Non-Euclidean Trust-Region Optimization
Dmitry Kovalev
16 Mar 2025