ResearchTrend.AI

$μ$LO: Compute-Efficient Meta-Generalization of Learned Optimizers
arXiv:2406.00153

31 May 2024
Benjamin Thérien
Charles-Étienne Joseph
Boris Knyazev
Edouard Oyallon
Irina Rish
Eugene Belilovsky
    AI4CE

Papers citing "$μ$LO: Compute-Efficient Meta-Generalization of Learned Optimizers"

3 of 3 citing papers shown
Convergence of Adam Under Relaxed Assumptions
Haochuan Li, Alexander Rakhlin, Ali Jadbabaie
27 Apr 2023
A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases
James Harrison, Luke Metz, Jascha Narain Sohl-Dickstein
22 Sep 2022
A Simple Guard for Learned Optimizers
Isabeau Prémont-Schwarz, Jaroslav Vítků, Jan Feyereisl
28 Jan 2022