Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization

28 May 2018 · D. Kobak, Jonathan Lomond, Benoit Sanchez
ArXiv · PDF · HTML
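
The title's claim can be illustrated numerically. For p > n, the ridge estimator has an equivalent n × n dual form, beta_hat = X^T (X X^T + lambda I_n)^{-1} y, which remains well defined for small negative lambda as long as X X^T + lambda I_n stays positive definite. The sketch below is a hypothetical setup, not the authors' code: it sweeps lambda over negative, zero, and positive values on synthetic data and reports test error.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-dimensional setup: n samples, p >> n features,
# with a sparse true signal. (Illustrative only; not the paper's data.)
n, p = 50, 1000
X_train = rng.normal(size=(n, p))
X_test = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:10] = 1.0
y_train = X_train @ beta + rng.normal(scale=0.5, size=n)
y_test = X_test @ beta + rng.normal(scale=0.5, size=n)

def ridge_dual(X, y, lam):
    """Ridge solution via the n x n dual form, valid for p > n:

        beta_hat = X^T (X X^T + lam * I_n)^{-1} y

    Negative lam is admissible as long as X X^T + lam * I_n
    remains positive definite (i.e. lam > -min eigenvalue of X X^T).
    """
    K = X @ X.T
    return X.T @ np.linalg.solve(K + lam * np.eye(len(y)), y)

# Sweep the penalty across negative, zero, and positive values.
for lam in [-5.0, -1.0, 0.0, 1.0, 10.0, 100.0]:
    b = ridge_dual(X_train, y_train, lam)
    mse = np.mean((X_test @ b - y_test) ** 2)
    print(f"lambda = {lam:7.1f}   test MSE = {mse:.3f}")

With an isotropic design like this one, the best lambda will typically still be positive; the paper's point is that real-world covariance spectra, where a few high-variance directions carry the signal, supply an implicit ridge through the many low-variance directions, pushing the cross-validated optimum to zero or below.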

Papers citing "Optimal ridge penalty for real-world high-dimensional data can be zero or negative due to the implicit ridge regularization"

12 / 12 papers shown:
• Learning the Regularization Strength for Deep Fine-Tuning via a Data-Emphasized Variational Objective
  Ethan Harvey, Mikhail Petrov, Michael C. Hughes · 28 Jan 2025
• General Loss Functions Lead to (Approximate) Interpolation in High Dimensions
  Kuo-Wei Lai, Vidya Muthukumar · 13 Mar 2023
• Deep Linear Networks can Benignly Overfit when Shallow Ones Do
  Niladri S. Chatterji, Philip M. Long · 19 Sep 2022
• Target alignment in truncated kernel ridge regression
  Arash A. Amini, R. Baumgartner, Dai Feng · 28 Jun 2022
• A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
  Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk · 06 Sep 2021
• Interpolation can hurt robust generalization even when there is no noise
  Konstantin Donhauser, Alexandru Țifrea, Michael Aerni, Reinhard Heckel, Fanny Yang · 05 Aug 2021
• Slow-Growing Trees
  Philippe Goulet Coulombe · 02 Mar 2021
• Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
  Ben Adlam, Jeffrey Pennington · 04 Nov 2020
• Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
  Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens · 23 Apr 2020
• Exact expressions for double descent and implicit regularization via surrogate random design
  Michal Derezinski, Feynman T. Liang, Michael W. Mahoney · 10 Dec 2019
• A Model of Double Descent for High-dimensional Binary Linear Classification
  Zeyu Deng, A. Kammoun, Christos Thrampoulidis · 13 Nov 2019
• The generalization error of random features regression: Precise asymptotics and double descent curve
  Song Mei, Andrea Montanari · 14 Aug 2019