When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?
arXiv:2102.04998

Annual Conference Computational Learning Theory (COLT), 2021
9 February 2021
Niladri S. Chatterji
Philip M. Long
Peter L. Bartlett

Papers citing "When does gradient descent with logistic loss interpolate using deep networks with smoothed ReLU activations?"

8 papers
Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization
Yuhang Cai, Jingfeng Wu, Song Mei, Michael Lindsey, Peter L. Bartlett
12 Jun 2024
Generalization of Scaled Deep ResNets in the Mean-Field Regime
International Conference on Learning Representations (ICLR), 2024
Yihang Chen, Fanghui Liu, Yiping Lu, Grigorios G. Chrysos, Volkan Cevher
14 Mar 2024
Global Convergence of SGD For Logistic Loss on Two Layer Neural Nets
Pulkit Gopalani, Samyak Jha, Anirbit Mukherjee
17 Sep 2023
Fast Convergence in Learning Two-Layer Neural Networks with Separable Data
AAAI Conference on Artificial Intelligence (AAAI), 2023
Hossein Taheri, Christos Thrampoulidis
22 May 2023
On Feature Learning in Neural Networks with Global Convergence Guarantees
International Conference on Learning Representations (ICLR), 2022
Zhengdao Chen, Eric Vanden-Eijnden, Joan Bruna
22 Apr 2022
On the Global Convergence of Gradient Descent for multi-layer ResNets in the mean-field regime
Zhiyan Ding, Shi Chen, Qin Li, S. Wright
06 Oct 2021
Overparameterization of deep ResNet: zero loss and mean-field analysis
Journal of Machine Learning Research (JMLR), 2021
Zhiyan Ding, Shi Chen, Qin Li, S. Wright
30 May 2021
Properties of the After Kernel
Philip M. Long
21 May 2021