Aiming towards the minimizers: fast convergence of SGD for overparametrized problems
arXiv:2306.02601

5 June 2023
Chaoyue Liu, Dmitriy Drusvyatskiy, M. Belkin, Damek Davis, Yi Ma
ODL

Papers citing "Aiming towards the minimizers: fast convergence of SGD for overparametrized problems"

13 / 13 papers shown
A Novel Unified Parametric Assumption for Nonconvex Optimization
Artem Riabinin, Ahmed Khaled, Peter Richtárik · 17 Feb 2025

Loss Landscape Characterization of Neural Networks without Over-Parametrization
Rustem Islamov, Niccolò Ajroldi, Antonio Orvieto, Aurelien Lucchi · 16 Oct 2024

Nesterov acceleration in benignly non-convex landscapes
Kanan Gupta, Stephan Wojtowytsch · 10 Oct 2024

Hybrid Coordinate Descent for Efficient Neural Network Learning Using Line Search and Gradient Descent
Yen-Che Hsiao, Abhishek Dutta · 02 Aug 2024

Provable Optimization for Adversarial Fair Self-supervised Contrastive Learning
Qi Qi, Quanqi Hu, Qihang Lin, Tianbao Yang · 09 Jun 2024

From Inverse Optimization to Feasibility to ERM
Saurabh Mishra, Anant Raj, Sharan Vaswani · 27 Feb 2024

Challenges in Training PINNs: A Loss Landscape Perspective
Pratik Rathore, Weimu Lei, Zachary Frangella, Lu Lu, Madeleine Udell · 02 Feb 2024 · AI4CE, PINN, ODL

A Theoretical Analysis of Noise Geometry in Stochastic Gradient Descent
Mingze Wang, Lei Wu · 01 Oct 2023

No Wrong Turns: The Simple Geometry Of Neural Networks Optimization Paths
Charles Guille-Escuret, Hiroki Naganuma, Kilian Fatras, Ioannis Mitliagkas · 20 Jun 2023

How much pre-training is enough to discover a good subnetwork?
Cameron R. Wolfe, Fangshuo Liao, Qihan Wang, J. Kim, Anastasios Kyrillidis · 31 Jul 2021

Stochastic algorithms with geometric step decay converge linearly on sharp functions
Damek Davis, Dmitriy Drusvyatskiy, Vasileios Charisopoulos · 22 Jul 2019

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt · 16 Aug 2016

Learning without Concentration
S. Mendelson · 01 Jan 2014