Fantastic Generalization Measures and Where to Find Them (arXiv:1912.02178)
4 December 2019
Yiding Jiang, Behnam Neyshabur, H. Mobahi, Dilip Krishnan, Samy Bengio
AI4CE

Papers citing "Fantastic Generalization Measures and Where to Find Them"

Showing 29 of 179 citing papers.

Adversarial Training Makes Weight Loss Landscape Sharper in Logistic Regression
Masanori Yamada, Sekitoshi Kanai, Tomoharu Iwata, Tomokatsu Takahashi, Yuki Yamanaka, Hiroshi Takahashi, Atsutoshi Kumagai
AAML · 05 Feb 2021

Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience -- an initial exploration
Daniel C. Elton
16 Dec 2020

NeurIPS 2020 Competition: Predicting Generalization in Deep Learning
Yiding Jiang, Pierre Foret, Scott Yak, Daniel M. Roy, H. Mobahi, Gintare Karolina Dziugaite, Samy Bengio, Suriya Gunasekar, Isabelle M Guyon, Behnam Neyshabur
OOD · 14 Dec 2020

Predicting Generalization in Deep Learning via Local Measures of Distortion
Abhejit Rajagopal, Vamshi C. Madala, S. Chandrasekaran, P. Larson
13 Dec 2020

The Implicit Bias for Adaptive Optimization Algorithms on Homogeneous Neural Networks
Bohan Wang, Qi Meng, Wei Chen, Tie-Yan Liu
11 Dec 2020

Noise and Fluctuation of Finite Learning Rate Stochastic Gradient Descent
Kangqiao Liu, Liu Ziyin, Masakuni Ueda
MLT · 07 Dec 2020

A Bayesian Perspective on Training Speed and Model Selection
Clare Lyle, Lisa Schut, Binxin Ru, Y. Gal, Mark van der Wilk
27 Oct 2020

The Deep Bootstrap Framework: Good Online Learners are Good Offline Generalizers
Preetum Nakkiran, Behnam Neyshabur, Hanie Sedghi
OffRL · 16 Oct 2020

Sharpness-Aware Minimization for Efficiently Improving Generalization
Pierre Foret, Ariel Kleiner, H. Mobahi, Behnam Neyshabur
AAML · 03 Oct 2020

Data-Efficient Pretraining via Contrastive Self-Supervision
Nils Rethmeier, Isabelle Augenstein
02 Oct 2020

Learning Optimal Representations with the Decodable Information Bottleneck
Yann Dubois, Douwe Kiela, D. Schwab, Ramakrishna Vedantam
27 Sep 2020

Analysis of Generalizability of Deep Neural Networks Based on the Complexity of Decision Boundary
Shuyue Guan, Murray H. Loew
16 Sep 2020

What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Vitaly Feldman, Chiyuan Zhang
TDI · 09 Aug 2020

Is SGD a Bayesian sampler? Well, almost
Chris Mingard, Guillermo Valle Pérez, Joar Skalse, A. Louis
BDL · 26 Jun 2020

Entropic gradient descent algorithms and wide flat minima
Fabrizio Pittorino, Carlo Lucibello, Christoph Feinauer, Gabriele Perugini, Carlo Baldassi, Elizaveta Demyanenko, R. Zecchina
ODL, MLT · 14 Jun 2020

Directional convergence and alignment in deep learning
Ziwei Ji, Matus Telgarsky
11 Jun 2020

Speedy Performance Estimation for Neural Architecture Search
Binxin Ru, Clare Lyle, Lisa Schut, M. Fil, Mark van der Wilk, Y. Gal
08 Jun 2020

The large learning rate phase of deep learning: the catapult mechanism
Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari
ODL · 04 Mar 2020

Rethinking Parameter Counting in Deep Models: Effective Dimensionality Revisited
Wesley J. Maddox, Gregory W. Benton, A. Wilson
04 Mar 2020

Predicting Neural Network Accuracy from Weights
Thomas Unterthiner, Daniel Keysers, Sylvain Gelly, Olivier Bousquet, Ilya O. Tolstikhin
26 Feb 2020

The Two Regimes of Deep Network Training
Guillaume Leclerc, Aleksander Madry
24 Feb 2020

The Break-Even Point on Optimization Trajectories of Deep Neural Networks
Stanislaw Jastrzebski, Maciej Szymczak, Stanislav Fort, Devansh Arpit, Jacek Tabor, Kyunghyun Cho, Krzysztof J. Geras
21 Feb 2020

Bayesian Deep Learning and a Probabilistic Perspective of Generalization
A. Wilson, Pavel Izmailov
UQCV, BDL, OOD · 20 Feb 2020

Optimized Generic Feature Learning for Few-shot Classification across Domains
Tonmoy Saikia, Thomas Brox, Cordelia Schmid
VLM · 22 Jan 2020

The Generalization-Stability Tradeoff In Neural Network Pruning
Brian Bartoldson, Ari S. Morcos, Adrian Barbu, G. Erlebacher
09 Jun 2019

Generalization bounds for deep convolutional neural networks
Philip M. Long, Hanie Sedghi
MLT · 29 May 2019

Normalized Flat Minima: Exploring Scale Invariant Definition of Flat Minima for Neural Networks using PAC-Bayesian Analysis
Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama
15 Jan 2019

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
ODL · 15 Sep 2016

Norm-Based Capacity Control in Neural Networks
Behnam Neyshabur, Ryota Tomioka, Nathan Srebro
27 Feb 2015