
Exact expressions for double descent and implicit regularization via surrogate random design

Neural Information Processing Systems (NeurIPS), 2019
10 December 2019
Michal Derezinski
Feynman T. Liang
Michael W. Mahoney
arXiv: 1912.04533 (abs · PDF · HTML)

Papers citing "Exact expressions for double descent and implicit regularization via surrogate random design"

Showing 11 of 61 citing papers.

On Uniform Convergence and Low-Norm Interpolation Learning
Neural Information Processing Systems (NeurIPS), 2020
Lijia Zhou
Danica J. Sutherland
Nathan Srebro
10 Jun 2020

On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression
Neural Information Processing Systems (NeurIPS), 2020
Denny Wu
Ji Xu
10 Jun 2020

A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent
Zhenyu Liao
Romain Couillet
Michael W. Mahoney
09 Jun 2020

An Overview of Neural Network Compression
James O'Neill
05 Jun 2020

Determinantal Point Processes in Randomized Numerical Linear Algebra
Michal Derezinski
Michael W. Mahoney
07 May 2020

Optimal Regularization Can Mitigate Double Descent
International Conference on Learning Representations (ICLR), 2020
Preetum Nakkiran
Prayaag Venkat
Sham Kakade
Tengyu Ma
04 Mar 2020

Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method
Michal Derezinski
Rajiv Khanna
Michael W. Mahoney
21 Feb 2020

Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite Networks
AAAI Conference on Artificial Intelligence (AAAI), 2020
Russell Tsuchida
Tim Pearce
Christopher van der Heide
Fred Roosta
M. Gallagher
20 Feb 2020

More Data Can Hurt for Linear Regression: Sample-wise Double Descent
Preetum Nakkiran
16 Dec 2019

Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
Charles H. Martin
Michael W. Mahoney
02 Oct 2018

Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization
D. Kobak
Jonathan Lomond
Benoit Sanchez
28 May 2018