On the Benefit of Width for Neural Networks: Disappearance of Bad Basins
Dawei Li, Tian Ding, Ruoyu Sun
28 December 2018 (arXiv:1812.11039)
Papers citing "On the Benefit of Width for Neural Networks: Disappearance of Bad Basins" (9 papers shown)
Analyzing the Role of Permutation Invariance in Linear Mode Connectivity
Keyao Zhan, Puheng Li, Lei Wu (MoMe)
13 Mar 2025

NTK-SAP: Improving neural network pruning by aligning training dynamics
Yite Wang, Dawei Li, Ruoyu Sun
06 Apr 2023

When Expressivity Meets Trainability: Fewer than n Neurons Can Work
Jiawei Zhang, Yushun Zhang, Mingyi Hong, Ruoyu Sun, Zhi-Quan Luo
21 Oct 2022

A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima
Tae-Eon Ko, Xiantao Li
21 Mar 2022

On the Proof of Global Convergence of Gradient Descent for Deep ReLU Networks with Linear Widths
Quynh N. Nguyen
24 Jan 2021

Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses
Charles G. Frye, James B. Simon, Neha S. Wadia, A. Ligeralde, M. DeWeese, K. Bouchard (ODL)
23 Mar 2020

Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity
Shiyu Liang, Ruoyu Sun, R. Srikant
31 Dec 2019

Global optimality conditions for deep neural networks
Chulhee Yun, S. Sra, Ali Jadbabaie
08 Jul 2017

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun (ODL)
30 Nov 2014