On the loss landscape of a class of deep neural networks with no bad local valleys
27 September 2018 · arXiv 1809.10749 (v2, latest)
Quynh N. Nguyen, Mahesh Chandra Mukkamala, Matthias Hein

Papers citing "On the loss landscape of a class of deep neural networks with no bad local valleys"

Showing 10 of 60 citing papers:

Convergence Rates of Variational Inference in Sparse Deep Learning
Badr-Eddine Chérief-Abdellatif · [BDL] · 09 Aug 2019

Are deep ResNets provably better than linear predictors?
Chulhee Yun, S. Sra, Ali Jadbabaie · 09 Jul 2019

Explaining Landscape Connectivity of Low-cost Solutions for Multilayer Nets
Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Sanjeev Arora, Rong Ge · [FAtt] · 14 Jun 2019

Machine Learning and System Identification for Estimation in Physical Systems
Fredrik Bagge Carlson · [OOD] · 05 Jun 2019

Traversing the noise of dynamic mini-batch sub-sampled loss functions: A visual guide
D. Kafka, D. Wilke · 20 Mar 2019

On Connected Sublevel Sets in Deep Learning
Quynh N. Nguyen · 22 Jan 2019

Elimination of All Bad Local Minima in Deep Learning
Kenji Kawaguchi, L. Kaelbling · 02 Jan 2019

On the Benefit of Width for Neural Networks: Disappearance of Bad Basins
Dawei Li, Tian Ding, Ruoyu Sun · 28 Dec 2018

Non-attracting Regions of Local Minima in Deep and Wide Neural Networks
Henning Petzka, C. Sminchisescu · 16 Dec 2018

A Priori Estimates of the Population Risk for Two-layer Neural Networks
Weinan E, Chao Ma, Lei Wu · 15 Oct 2018