Sparse Optimization on Measures with Over-parameterized Gradient Descent
Lénaïc Chizat · 24 July 2019 · arXiv:1907.10300

Papers citing "Sparse Optimization on Measures with Over-parameterized Gradient Descent"

17 papers shown

Hellinger-Kantorovich Gradient Flows: Global Exponential Decay of Entropy Functionals
Alexander Mielke, Jia Jie Zhu · 28 Jan 2025

Global Optimality of Elman-type RNN in the Mean-Field Regime
Andrea Agazzi, Jian-Xiong Lu, Sayan Mukherjee · MLT · 12 Mar 2023

An Explicit Expansion of the Kullback-Leibler Divergence along its Fisher-Rao Gradient Flow
Carles Domingo-Enrich, Aram-Alexandre Pooladian · MDE · 23 Feb 2023

Convergence beyond the over-parameterized regime using Rayleigh quotients
David A. R. Robin, Kevin Scaman, Marc Lelarge · 19 Jan 2023

Unbalanced Optimal Transport, from Theory to Numerics
Thibault Séjourné, Gabriel Peyré, François-Xavier Vialard · OT · 16 Nov 2022

Learning sparse features can lead to overfitting in neural networks
Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, M. Wyart · MLT · 24 Jun 2022

Provable Acceleration of Heavy Ball beyond Quadratics for a Class of Polyak-Łojasiewicz Functions when the Non-Convexity is Averaged-Out
Jun-Kun Wang, Chi-Heng Lin, Andre Wibisono, Bin Hu · 22 Jun 2022

Convex Analysis of the Mean Field Langevin Dynamics
Atsushi Nitanda, Denny Wu, Taiji Suzuki · MLT · 25 Jan 2022

Parallel Deep Neural Networks Have Zero Duality Gap
Yifei Wang, Tolga Ergen, Mert Pilanci · 13 Oct 2021

Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
Arnulf Jentzen, Adrian Riekert · 09 Jul 2021

Global Convergence of Three-layer Neural Networks in the Mean Field Regime
H. Pham, Phan-Minh Nguyen · MLT, AI4CE · 11 May 2021

Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime
Andrea Agazzi, Jianfeng Lu · 22 Oct 2020

Quantitative Propagation of Chaos for SGD in Wide Neural Networks
Valentin De Bortoli, Alain Durmus, Xavier Fontaine, Umut Simsekli · 13 Jul 2020

A Mean-field Analysis of Deep ResNet and Beyond: Towards Provable Optimization Via Overparameterization From Depth
Yiping Lu, Chao Ma, Yulong Lu, Jianfeng Lu, Lexing Ying · MLT · 11 Mar 2020

Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
Lénaïc Chizat, Francis R. Bach · MLT · 11 Feb 2020

Sinkhorn Divergences for Unbalanced Optimal Transport
Thibault Séjourné, Jean Feydy, François-Xavier Vialard, A. Trouvé, Gabriel Peyré · OT · 28 Oct 2019

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt · 16 Aug 2016