Impact of classification difficulty on the weight matrices spectra in Deep Learning and application to early-stopping

26 November 2021
Xuran Meng, Jianfeng Yao

Papers citing "Impact of classification difficulty on the weight matrices spectra in Deep Learning and application to early-stopping"

7 / 7 papers shown
Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias
Sierra Wyllie, Ilia Shumailov, Nicolas Papernot
12 Mar 2024

Heavy-Tailed Regularization of Weight Matrices in Deep Neural Networks
Xuanzhe Xiao, Zengyi Li, Chuanlong Xie, Fengwei Zhou
06 Apr 2023

Per-Example Gradient Regularization Improves Learning Signals from Noisy Data
Xuran Meng, Yuan Cao, Difan Zou
31 Mar 2023

Deep Learning Weight Pruning with RMT-SVD: Increasing Accuracy and Reducing Overfitting
Yitzchak Shmalo, Jonathan Jenkins, Oleksii Krupchytskyi
15 Mar 2023

Spectral Evolution and Invariance in Linear-width Neural Networks
Zhichao Wang, A. Engel, Anand D. Sarwate, Ioana Dumitriu, Tony Chiang
11 Nov 2022

Evaluating natural language processing models with generalization metrics that do not need access to any training or testing data
Yaoqing Yang, Ryan Theisen, Liam Hodgkinson, Joseph E. Gonzalez, Kannan Ramchandran, Charles H. Martin, Michael W. Mahoney
06 Feb 2022

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016