Implicit Regularization in Deep Learning May Not Be Explainable by Norms
Noam Razin, Nadav Cohen
arXiv:2005.06398, 13 May 2020

Papers citing "Implicit Regularization in Deep Learning May Not Be Explainable by Norms" (50 of 112 papers shown)
• Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets
  International Conference on Learning Representations (ICLR), 2022
  Edo Cohen-Karlik, Itamar Menuhin-Gruman, Raja Giryes, Nadav Cohen, Amir Globerson
  25 Oct 2022
• Deep Linear Networks for Matrix Completion -- An Infinite Depth Limit
  SIAM Journal on Applied Dynamical Systems (SIADS), 2022
  Nadav Cohen, Govind Menon, Zsolt Veraszto
  Communities: ODL
  22 Oct 2022
• Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data
  International Conference on Learning Representations (ICLR), 2022
  Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro, Wei Hu
  Communities: MLT
  13 Oct 2022
• Self-supervised debiasing using low rank regularization
  Computer Vision and Pattern Recognition (CVPR), 2022
  Geon Yeong Park, Chanyong Jung, Sangmin Lee, Jong Chul Ye, Sang Wan Lee
  Communities: CML, SSL
  11 Oct 2022
• Deep Linear Networks can Benignly Overfit when Shallow Ones Do
  Journal of Machine Learning Research (JMLR), 2022
  Niladri S. Chatterji, Philip M. Long
  19 Sep 2022
• On the Implicit Bias in Deep-Learning Algorithms
  Communications of the ACM (CACM), 2022
  Gal Vardi
  Communities: FedML, AI4CE
  26 Aug 2022
• Explicit Use of Fourier Spectrum in Generative Adversarial Networks
  Soroush Sheikh Gargar
  Communities: GAN, OOD
  02 Aug 2022
• Implicit Regularization with Polynomial Growth in Deep Tensor Factorization
  International Conference on Machine Learning (ICML), 2022
  Kais Hariz, Hachem Kadri, Stéphane Ayache, Maher Moakher, Thierry Artières
  18 Jul 2022
• Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent
  Neural Information Processing Systems (NeurIPS), 2022
  Zhiyuan Li, Tianhao Wang, Jason D. Lee, Sanjeev Arora
  08 Jul 2022
• Reconstructing Training Data from Trained Neural Networks
  Neural Information Processing Systems (NeurIPS), 2022
  Niv Haim, Gal Vardi, Gilad Yehudai, Ohad Shamir, Michal Irani
  15 Jun 2022
• Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction
  Neural Information Processing Systems (NeurIPS), 2022
  Kaifeng Lyu, Zhiyuan Li, Sanjeev Arora
  Communities: FAtt
  14 Jun 2022
• Special Properties of Gradient Descent with Large Learning Rates
  International Conference on Machine Learning (ICML), 2022
  Amirkeivan Mohtashami, Martin Jaggi, Sebastian U. Stich
  Communities: MLT
  30 May 2022
• Your Contrastive Learning Is Secretly Doing Stochastic Neighbor Embedding
  International Conference on Learning Representations (ICLR), 2022
  Tianyang Hu, Zhili Liu, Fengwei Zhou, Wei Cao, Weiran Huang
  Communities: SSL
  30 May 2022
• Learning to Reason with Neural Networks: Generalization, Unseen Data and Boolean Measures
  Neural Information Processing Systems (NeurIPS), 2022
  Emmanuel Abbe, Samy Bengio, Elisabetta Cornacchia, Jon M. Kleinberg, Aryo Lotfi, M. Raghu, Chiyuan Zhang
  Communities: MLT
  26 May 2022
• On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias
  Neural Information Processing Systems (NeurIPS), 2022
  Itay Safran, Gal Vardi, Jason D. Lee
  Communities: MLT
  18 May 2022
• The Mechanism of Prediction Head in Non-contrastive Self-supervised Learning
  Neural Information Processing Systems (NeurIPS), 2022
  Zixin Wen, Yuanzhi Li
  Communities: SSL
  12 May 2022
• A Falsificationist Account of Artificial Neural Networks
  British Journal for the Philosophy of Science (BJPS), 2022
  O. Buchholz, Eric Raidl
  Communities: AI4CE
  03 May 2022
• Robust Training under Label Noise by Over-parameterization
  International Conference on Machine Learning (ICML), 2022
  Sheng Liu, Zhihui Zhu, Qing Qu, Chong You
  Communities: NoLa, OOD
  28 Feb 2022
• A Note on Machine Learning Approach for Computational Imaging
  Bin Dong
  24 Feb 2022
• On Optimal Early Stopping: Over-informative versus Under-informative Parametrization
  Ruoqi Shen, Liyao (Mars) Gao, Yi-An Ma
  20 Feb 2022
• A Data-Augmentation Is Worth A Thousand Samples: Exact Quantification From Analytical Augmented Sample Moments
  Randall Balestriero, Ishan Misra, Yann LeCun
  16 Feb 2022
• Support Vectors and Gradient Dynamics of Single-Neuron ReLU Networks
  Sangmin Lee, Byeongsu Sim, Jong Chul Ye
  Communities: MLT
  11 Feb 2022
• The Role of Linear Layers in Nonlinear Interpolating Networks
  Greg Ongie, Rebecca Willett
  02 Feb 2022
• Implicit Regularization Towards Rank Minimization in ReLU Networks
  International Conference on Algorithmic Learning Theory (ALT), 2022
  Nadav Timor, Gal Vardi, Ohad Shamir
  30 Jan 2022
• Limitation of Characterizing Implicit Regularization by Data-independent Functions
  Leyang Zhang, Z. Xu, Yaoyu Zhang
  28 Jan 2022
• Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks
  International Conference on Machine Learning (ICML), 2022
  Noam Razin, Asaf Maman, Nadav Cohen
  27 Jan 2022
• More is Less: Inducing Sparsity via Overparameterization
  Information and Inference: A Journal of the IMA (JIII), 2021
  H. Chou, J. Maly, Holger Rauhut
  21 Dec 2021
• On the Regularization of Autoencoders
  Harald Steck, Dario Garcia-Garcia
  Communities: SSL, AI4CE
  21 Oct 2021
• Implicit Bias of Linear Equivariant Networks
  International Conference on Machine Learning (ICML), 2021
  Hannah Lawrence, Kristian Georgiev, A. Dienes, B. Kiani
  Communities: AI4CE
  12 Oct 2021
• An Unconstrained Layer-Peeled Perspective on Neural Collapse
  Wenlong Ji, Yiping Lu, Yiliang Zhang, Zhun Deng, Weijie J. Su
  06 Oct 2021
• On Margin Maximization in Linear and ReLU Networks
  Gal Vardi, Ohad Shamir, Nathan Srebro
  06 Oct 2021
• The Benefits of Implicit Regularization from SGD in Least Squares Problems
  Neural Information Processing Systems (NeurIPS), 2021
  Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean Phillips Foster, Sham Kakade
  10 Aug 2021
• Convergence of gradient descent for learning linear neural networks
  Advances in Continuous and Discrete Models (ACDM), 2021
  Gabin Maxime Nguegnang, Holger Rauhut, Ulrich Terstiege
  Communities: MLT
  04 Aug 2021
• The loss landscape of deep linear neural networks: a second-order analysis
  El Mehdi Achour, François Malgouyres, Sébastien Gerchinovitz
  Communities: ODL
  28 Jul 2021
• Continuous vs. Discrete Optimization of Deep Neural Networks
  Neural Information Processing Systems (NeurIPS), 2021
  Omer Elkabetz, Nadav Cohen
  14 Jul 2021
• SGD: The Role of Implicit Regularization, Batch-size and Multiple-epochs
  Satyen Kale, Ayush Sekhari, Karthik Sridharan
  11 Jul 2021
• A Theoretical Analysis of Fine-tuning with Linear Teachers
  Gal Shachaf, Alon Brutzkus, Amir Globerson
  04 Jul 2021
• Implicit Greedy Rank Learning in Autoencoders via Overparameterized Linear Networks
  Shih-Yu Sun, Vimal Thilak, Etai Littwin, Omid Saremi, J. Susskind
  02 Jul 2021
• Small random initialization is akin to spectral learning: Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction
  Neural Information Processing Systems (NeurIPS), 2021
  Dominik Stöger, Mahdi Soltanolkotabi
  Communities: ODL
  28 Jun 2021
• Implicit Regularization in Matrix Sensing via Mirror Descent
  Neural Information Processing Systems (NeurIPS), 2021
  Fan Wu, Patrick Rebeschini
  28 May 2021
• Optimization Induced Equilibrium Networks
  Xingyu Xie, Qiuhao Wang, Zenan Ling, Xia Li, Yisen Wang, Guangcan Liu, Zhouchen Lin
  27 May 2021
• A Geometric Analysis of Neural Collapse with Unconstrained Features
  Neural Information Processing Systems (NeurIPS), 2021
  Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu
  06 May 2021
• Implicit Regularization in Deep Tensor Factorization
  IEEE International Joint Conference on Neural Networks (IJCNN), 2021
  P. Milanesi, Hachem Kadri, Stéphane Ayache, Thierry Artières
  04 May 2021
• Comments on Leo Breiman's paper 'Statistical Modeling: The Two Cultures' (Statistical Science, 2001, 16(3), 199-231)
  Jelena Bradic, Yinchu Zhu
  21 Mar 2021
• The Low-Rank Simplicity Bias in Deep Networks
  Minyoung Huh, H. Mobahi, Richard Y. Zhang, Brian Cheung, Pulkit Agrawal, Phillip Isola
  18 Mar 2021
• Experiments with Rich Regime Training for Deep Learning
  Xinyan Li, A. Banerjee
  26 Feb 2021
• Inductive Bias of Multi-Channel Linear Convolutional Networks with Bounded Weight Norm
  Annual Conference on Computational Learning Theory (COLT), 2021
  Meena Jagadeesan, Ilya P. Razenshteyn, Suriya Gunasekar
  24 Feb 2021
• Implicit Regularization in Tensor Factorization
  International Conference on Machine Learning (ICML), 2021
  Noam Razin, Asaf Maman, Nadav Cohen
  19 Feb 2021
• On the Implicit Bias of Initialization Shape: Beyond Infinitesimal Mirror Descent
  International Conference on Machine Learning (ICML), 2021
  Shahar Azulay, E. Moroshko, Mor Shpigel Nacson, Blake E. Woodworth, Nathan Srebro, Amir Globerson, Daniel Soudry
  Communities: AI4CE
  19 Feb 2021
• Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning
  International Conference on Learning Representations (ICLR), 2020
  Zhiyuan Li, Yuping Luo, Kaifeng Lyu
  17 Dec 2020