Landscape Connectivity and Dropout Stability of SGD Solutions for Over-parameterized Neural Networks

20 December 2019 · arXiv:1912.10095
A. Shevchenko, Marco Mondelli

Papers citing "Landscape Connectivity and Dropout Stability of SGD Solutions for Over-parameterized Neural Networks"

All 8 citing papers shown.

| Title | Authors | Topics | Citations | Date |
| --- | --- | --- | --- | --- |
| Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime | Francesco Camilli, D. Tieplova, Eleonora Bergamin, Jean Barbier | | 0 | 06 May 2025 |
| Mode Connectivity in Auction Design | Christoph Hertrich, Yixin Tao, László A. Végh | | 1 | 18 May 2023 |
| On Quantum Speedups for Nonconvex Optimization via Quantum Tunneling Walks | Yizhou Liu, Weijie J. Su, Tongyang Li | | 17 | 29 Sep 2022 |
| Mode connectivity in the loss landscape of parameterized quantum circuits | Kathleen E. Hamilton, E. Lynn, R. Pooser | | 3 | 09 Nov 2021 |
| Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks | A. Shevchenko, Vyacheslav Kungurtsev, Marco Mondelli | MLT | 13 | 03 Nov 2021 |
| Global Convergence of Three-layer Neural Networks in the Mean Field Regime | H. Pham, Phan-Minh Nguyen | MLT, AI4CE | 19 | 11 May 2021 |
| On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang | ODL | 2,886 | 15 Sep 2016 |
| The Loss Surfaces of Multilayer Networks | A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun | ODL | 1,185 | 30 Nov 2014 |