ResearchTrend.AI

From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent
Neural Information Processing Systems (NeurIPS), 2022
13 October 2022
arXiv: 2210.06705
Satyen Kale, Jason D. Lee, Chris De Sa, Ayush Sekhari, Karthik Sridharan

Papers citing "From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent"

4 / 4 papers shown
Can Looped Transformers Learn to Implement Multi-step Gradient Descent for In-context Learning?
International Conference on Machine Learning (ICML), 2024
Khashayar Gatmiry, Nikunj Saunshi, Sashank J. Reddi, Stefanie Jegelka, Sanjiv Kumar
10 Oct 2024
Convergence of continuous-time stochastic gradient descent with applications to deep neural networks
Gabor Lugosi, Eulalia Nualart
11 Sep 2024
Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk Minimization Framework
Sina Baharlouei, Meisam Razaviyayn
20 Sep 2023
Convergence of stochastic gradient descent under a local Lojasiewicz condition for deep neural networks
Jing An, Jianfeng Lu
18 Apr 2023