Normalized Gradients for All
arXiv:2308.05621
Francesco Orabona
10 August 2023

Papers citing "Normalized Gradients for All" (11 papers)
Non-Euclidean Broximal Point Method: A Blueprint for Geometry-Aware Optimization
Kaja Gruntkowska, Peter Richtárik (01 Oct 2025)

Stochastic Adaptive Gradient Descent Without Descent
Jean-François Aujol, Jérémie Bigot, Camille Castera (18 Sep 2025)

AdaGrad Meets Muon: Adaptive Stepsizes for Orthogonal Updates
Minxin Zhang, Yuxuan Liu, Hayden Schaeffer (03 Sep 2025)

Nesterov Finds GRAAL: Optimal and Adaptive Gradient Method for Convex Optimization
Ekaterina Borodich, D. Kovalev (13 Jul 2025)

Glocal Smoothness: Line Search can really help!
Curtis Fox, Aaron Mishkin, Sharan Vaswani, Mark Schmidt (14 Jun 2025)

Generalized Gradient Norm Clipping & Non-Euclidean $(L_0,L_1)$-Smoothness
Thomas Pethick, Wanyun Xie, Mete Erdogan, Kimon Antonakopoulos, Tony Silveti-Falls, Volkan Cevher (02 Jun 2025)

Directional Smoothness and Gradient Methods: Convergence and Adaptivity
Aaron Mishkin, Ahmed Khaled, Yuanhao Wang, Aaron Defazio, Robert Mansel Gower (06 Mar 2024)

Tuning-Free Stochastic Optimization
Ahmed Khaled, Chi Jin (12 Feb 2024)

Discounted Adaptive Online Learning: Towards Better Regularization
Zhiyu Zhang, David Bombara, Heng Yang (05 Feb 2024)

A simple uniformly optimal method without line search for convex optimization
Tianjiao Li, Guanghui Lan (16 Oct 2023)

DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method
Neural Information Processing Systems (NeurIPS), 2023
Ahmed Khaled, Konstantin Mishchenko, Chi Jin (25 May 2023)