Visualizing the Loss Landscape of Neural Nets
Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein
28 December 2017
Papers citing "Visualizing the Loss Landscape of Neural Nets"

50 / 1,039 papers shown
Review: Deep Learning in Electron Microscopy
Jeffrey M. Ede · 17 Sep 2020

Effective Federated Adaptive Gradient Methods with Non-IID Decentralized Data
Qianqian Tong, Guannan Liang, J. Bi · FedML · 14 Sep 2020

Deforming the Loss Surface to Affect the Behaviour of the Optimizer
Liangming Chen, Long Jin, Xiujuan Du, Shuai Li, Mei Liu · ODL · 14 Sep 2020

On the Orthogonality of Knowledge Distillation with Other Techniques: From an Ensemble Perspective
Seonguk Park, Kiyoon Yoo, Nojun Kwak · FedML · 09 Sep 2020

Going deeper with brain morphometry using neural networks
Rodrigo Santa Cruz, Leo Lebrat, Pierrick Bourgeat, Vincent Doré, Jason Dowling, Jurgen Fripp, Clinton Fookes, Olivier Salvado · 3DV, MedIm · 07 Sep 2020

S-SGD: Symmetrical Stochastic Gradient Descent with Weight Noise Injection for Reaching Flat Minima
Wonyong Sung, Iksoo Choi, Jinhwan Park, Seokhyun Choi, Sungho Shin · ODL · 05 Sep 2020

Visualizing the Loss Landscape of Actor Critic Methods with Applications in Inventory Optimization
Recep Yusuf Bekci, M. Gümüş · 04 Sep 2020

DARTS-: Robustly Stepping out of Performance Collapse Without Indicators
Xiangxiang Chu, Xiaoxing Wang, Bo-Wen Zhang, Shun Lu, Xiaolin K. Wei, Junchi Yan · 02 Sep 2020

A Primer on Motion Capture with Deep Learning: Principles, Pitfalls and Perspectives
Alexander Mathis, Steffen Schneider, Jessy Lauer, Mackenzie W. Mathis · 01 Sep 2020

Adversarially Robust Learning via Entropic Regularization
Gauri Jagatap, Ameya Joshi, A. B. Chowdhury, S. Garg, C. Hegde · OOD · 27 Aug 2020

Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses
Fu-Huei Lin, Rohit Mittapalli, Prithvijit Chattopadhyay, Daniel Bolya, Judy Hoffman · AAML · 25 Aug 2020

Prevention is Better than Cure: Handling Basis Collapse and Transparency in Dense Networks
Gurpreet Singh, Soumyajit Gupta, Clint Dawson · AI4CE · 22 Aug 2020

DronePose: Photorealistic UAV-Assistant Dataset Synthesis for 3D Pose Estimation via a Smooth Silhouette Loss
G. Albanis, N. Zioulis, A. Dimou, D. Zarpalas, P. Daras · 3DH · 20 Aug 2020

Neural Complexity Measures
Yoonho Lee, Juho Lee, Sung Ju Hwang, Eunho Yang, Seungjin Choi · 07 Aug 2020

Continuous-in-Depth Neural Networks
A. Queiruga, N. Benjamin Erichson, D. Taylor, Michael W. Mahoney · 05 Aug 2020

A Neural-Symbolic Framework for Mental Simulation
Michael D Kissner · 05 Aug 2020

Analytic Characterization of the Hessian in Shallow ReLU Models: A Tale of Symmetry
Yossi Arjevani, M. Field · 04 Aug 2020

Shallow Univariate ReLu Networks as Splines: Initialization, Loss Surface, Hessian, & Gradient Flow Dynamics
Justin Sahs, Ryan Pyle, Aneel Damaraju, J. O. Caro, Onur Tavaslioglu, Andy Lu, Ankit B. Patel · 04 Aug 2020

MLR-SNet: Transferable LR Schedules for Heterogeneous Tasks
Jun Shu, Yanwen Zhu, Qian Zhao, Zongben Xu, Deyu Meng · 29 Jul 2020

Deforming the Loss Surface
Liangming Chen, Long Jin, Xiujuan Du, Shuai Li, Mei Liu · ODL · 24 Jul 2020

The Representation Theory of Neural Networks
M. Armenta, Pierre-Marc Jodoin · 23 Jul 2020

Deep Learning Based Brain Tumor Segmentation: A Survey
Zhihua Liu, Lei Tong, Zheheng Jiang, Long Chen, Feixiang Zhou, Qianni Zhang, Xiangrong Zhang, Ling Li, Huiyu Zhou · 3DV · 18 Jul 2020

Data-driven effective model shows a liquid-like deep learning
Wenxuan Zou, Haiping Huang · 16 Jul 2020

Quantitative Propagation of Chaos for SGD in Wide Neural Networks
Valentin De Bortoli, Alain Durmus, Xavier Fontaine, Umut Simsekli · 13 Jul 2020

RicciNets: Curvature-guided Pruning of High-performance Neural Networks Using Ricci Flow
Samuel Glass, Simeon E. Spasov, Pietro Lió · 08 Jul 2020

Model-based Clustering using Automatic Differentiation: Confronting Misspecification and High-Dimensional Data
Siva Rajesh Kasa, Vaibhav Rajan · 08 Jul 2020

The Global Landscape of Neural Networks: An Overview
Ruoyu Sun, Dawei Li, Shiyu Liang, Tian Ding, R. Srikant · 02 Jul 2020

Persistent Neurons
Yimeng Min · 02 Jul 2020

Learn Faster and Forget Slower via Fast and Stable Task Adaptation
Farshid Varno, Lucas May Petry, Lisa Di-Jorio, Stan Matwin · CLL · 02 Jul 2020

Go Wide, Then Narrow: Efficient Training of Deep Thin Networks
Denny Zhou, Mao Ye, Chen Chen, Tianjian Meng, Mingxing Tan, Xiaodan Song, Quoc V. Le, Qiang Liu, Dale Schuurmans · 01 Jul 2020

On the Demystification of Knowledge Distillation: A Residual Network Perspective
N. Jha, Rajat Saini, Sparsh Mittal · 30 Jun 2020

Gradient-only line searches to automatically determine learning rates for a variety of stochastic training algorithms
D. Kafka, D. Wilke · ODL · 29 Jun 2020

Lipschitzness Is All You Need To Tame Off-policy Generative Adversarial Imitation Learning
Lionel Blondé, Pablo Strasser, Alexandros Kalousis · 28 Jun 2020

Smooth Adversarial Training
Cihang Xie, Mingxing Tan, Boqing Gong, Alan Yuille, Quoc V. Le · OOD · 25 Jun 2020

Dynamic of Stochastic Gradient Descent with State-Dependent Noise
Qi Meng, Shiqi Gong, Wei Chen, Zhi-Ming Ma, Tie-Yan Liu · 24 Jun 2020

Learning Potentials of Quantum Systems using Deep Neural Networks
Arijit Sehanobish, H. Corzo, Onur Kara, David van Dijk · 23 Jun 2020

IDF++: Analyzing and Improving Integer Discrete Flows for Lossless Compression
Rianne van den Berg, A. Gritsenko, Mostafa Dehghani, C. Sønderby, Tim Salimans · 22 Jun 2020

On the alpha-loss Landscape in the Logistic Model
Tyler Sypherd, Mario Díaz, Lalitha Sankar, Gautam Dasarathy · 22 Jun 2020

MaxVA: Fast Adaptation of Step Sizes by Maximizing Observed Variance of Gradients
Chenfei Zhu, Yu Cheng, Zhe Gan, Furong Huang, Jingjing Liu, Tom Goldstein · ODL · 21 Jun 2020

On the Almost Sure Convergence of Stochastic Gradient Descent in Non-Convex Problems
P. Mertikopoulos, Nadav Hallak, Ali Kavis, V. Cevher · 19 Jun 2020

FrostNet: Towards Quantization-Aware Network Architecture Search
Taehoon Kim, Y. Yoo, Jihoon Yang · MQ · 17 Jun 2020

Learning Rates as a Function of Batch Size: A Random Matrix Theory Approach to Neural Network Training
Diego Granziol, S. Zohren, Stephen J. Roberts · ODL · 16 Jun 2020

SPLASH: Learnable Activation Functions for Improving Accuracy and Adversarial Robustness
Mohammadamin Tavakoli, Forest Agostinelli, Pierre Baldi · AAML, FAtt · 16 Jun 2020

Feature Space Saturation during Training
Mats L. Richter, Justin Shenk, Wolf Byttner, Anders Arpteg, Mikael Huss · FAtt · 15 Jun 2020

On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
Chen Liu, Mathieu Salzmann, Tao R. Lin, Ryota Tomioka, Sabine Süsstrunk · AAML · 15 Jun 2020

Understanding Global Loss Landscape of One-hidden-layer ReLU Networks, Part 2: Experiments and Analysis
Bo Liu · 15 Jun 2020

MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
Jeongun Ryu, Jaewoong Shin, Haebeom Lee, Sung Ju Hwang · AAML, OOD · 13 Jun 2020

Is the Skip Connection Provable to Reform the Neural Network Loss Landscape?
Lifu Wang, Bo Shen, Ningrui Zhao, Zhiyuan Zhang · 10 Jun 2020

Interpolation between Residual and Non-Residual Networks
Zonghan Yang, Yang Liu, Chenglong Bao, Zuoqiang Shi · 10 Jun 2020

Extrapolation for Large-batch Training in Deep Learning
Tao R. Lin, Lingjing Kong, Sebastian U. Stich, Martin Jaggi · 10 Jun 2020