Large Scale Structure of Neural Network Loss Landscapes (arXiv:1906.04724)
11 June 2019
Stanislav Fort, Stanislaw Jastrzebski

Papers citing "Large Scale Structure of Neural Network Loss Landscapes"

24 / 24 papers shown
Analyzing the Role of Permutation Invariance in Linear Mode Connectivity
Keyao Zhan, Puheng Li, Lei Wu · MoMe · 13 Mar 2025

CreINNs: Credal-Set Interval Neural Networks for Uncertainty Estimation in Classification Tasks
Kaizheng Wang, Keivan K. Shariatmadar, Shireen Kudukkil Manchingal, Fabio Cuzzolin, David Moens, Hans Hallez · UQCV, BDL · 28 Jan 2025

Reinforcement Teaching
Alex Lewandowski, Calarina Muslimani, Dale Schuurmans, Matthew E. Taylor, Jun-Jie Luo · 28 Jan 2025

Input Space Mode Connectivity in Deep Neural Networks
Jakub Vrabel, Ori Shem-Ur, Yaron Oz, David Krueger · 09 Sep 2024

A survey of deep learning optimizers -- first and second order methods
Rohan Kashyap · ODL · 28 Nov 2022

Multiple Modes for Continual Learning
Siddhartha Datta, N. Shadbolt · CLL, MoMe · 29 Sep 2022

A Closer Look at Learned Optimization: Stability, Robustness, and Inductive Biases
James Harrison, Luke Metz, Jascha Narain Sohl-Dickstein · 22 Sep 2022

Linear Connectivity Reveals Generalization Strategies
Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, Naomi Saphra · 24 May 2022

Interpolating Compressed Parameter Subspaces
Siddhartha Datta, N. Shadbolt · 19 May 2022

Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks
Siddhartha Datta, N. Shadbolt · AAML · 07 Mar 2022

When Do Flat Minima Optimizers Work?
Jean Kaddour, Linqing Liu, Ricardo M. A. Silva, Matt J. Kusner · ODL · 01 Feb 2022

Towards Better Plasticity-Stability Trade-off in Incremental Learning: A Simple Linear Connector
Guoliang Lin, Hanlu Chu, Hanjiang Lai · MoMe, CLL · 15 Oct 2021

The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks
R. Entezari, Hanie Sedghi, O. Saukh, Behnam Neyshabur · MoMe · 12 Oct 2021

Connecting Low-Loss Subspace for Personalized Federated Learning
S. Hahn, Minwoo Jeong, Junghye Lee · FedML · 16 Sep 2021

What can linear interpolation of neural network loss landscapes tell us?
Tiffany J. Vlaar, Jonathan Frankle · MoMe · 30 Jun 2021

Extracting Global Dynamics of Loss Landscape in Deep Learning Models
Mohammed Eslami, Hamed Eramian, Marcio Gameiro, W. Kalies, Konstantin Mischaikow · 14 Jun 2021

Priors in Bayesian Deep Learning: A Review
Vincent Fortuin · UQCV, BDL · 14 May 2021

Learning Neural Network Subspaces
Mitchell Wortsman, Maxwell Horton, Carlos Guestrin, Ali Farhadi, Mohammad Rastegari · UQCV · 20 Feb 2021

On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
Chen Liu, Mathieu Salzmann, Tao R. Lin, Ryota Tomioka, Sabine Süsstrunk · AAML · 15 Jun 2020

Bayesian Deep Learning and a Probabilistic Perspective of Generalization
A. Wilson, Pavel Izmailov · UQCV, BDL, OOD · 20 Feb 2020

Deep Ensembles: A Loss Landscape Perspective
Stanislav Fort, Huiyi Hu, Balaji Lakshminarayanan · OOD, UQCV · 05 Dec 2019

Stiffness: A New Perspective on Generalization in Neural Networks
Stanislav Fort, Pawel Krzysztof Nowak, Stanislaw Jastrzebski, S. Narayanan · 28 Jan 2019

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · ODL · 15 Sep 2016

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun · ODL · 30 Nov 2014