
What Do Neural Networks Learn When Trained With Random Labels?

arXiv: 2006.10455 · 18 June 2020
Authors: Hartmut Maennel, Ibrahim M. Alabdulmohsin, Ilya O. Tolstikhin, R. Baldock, Olivier Bousquet, Sylvain Gelly, Daniel Keysers
Topics: FedML

Papers citing "What Do Neural Networks Learn When Trained With Random Labels?"

18 citing papers shown

Title | Authors | Topics | Date
Partitioned Neural Network Training via Synthetic Intermediate Labels | C. V. Karadag, Nezih Topaloglu | | 17 Mar 2024
In Search of a Data Transformation That Accelerates Neural Field Training | Junwon Seo, Sangyoon Lee, Kwang In Kim, Jaeho Lee | | 28 Nov 2023
Leveraging Unlabeled Data to Track Memorization | Mahsa Forouzesh, Hanie Sedghi, Patrick Thiran | NoLa, TDI | 08 Dec 2022
On Robust Learning from Noisy Labels: A Permutation Layer Approach | Salman Alsubaihi, Mohammed Alkhrashi, Raied Aljadaany, Fahad Albalawi, Bernard Ghanem | NoLa | 29 Nov 2022
Layer-Stack Temperature Scaling | Amr Khalifa, Michael C. Mozer, Hanie Sedghi, Behnam Neyshabur, Ibrahim M. Alabdulmohsin | | 18 Nov 2022
The Curious Case of Benign Memorization | Sotiris Anagnostidis, Gregor Bachmann, Lorenzo Noci, Thomas Hofmann | AAML | 25 Oct 2022
Stabilizing Off-Policy Deep Reinforcement Learning from Pixels | Edoardo Cetin, Philip J. Ball, Steve Roberts, Oya Celiktutan | | 03 Jul 2022
What do CNNs Learn in the First Layer and Why? A Linear Systems Perspective | Rhea Chowers, Yair Weiss | | 06 Jun 2022
A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning | Da-Wei Zhou, Qiwen Wang, Han-Jia Ye, De-Chuan Zhan | | 26 May 2022
Composing General Audio Representation by Fusing Multilayer Features of a Pre-trained Model | Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, N. Harada, K. Kashino | | 17 May 2022
Regularization by Misclassification in ReLU Neural Networks | Elisabetta Cornacchia, Jan Hązła, Ido Nachum, Amir Yehudayoff | NoLa | 03 Nov 2021
On the Impact of Stable Ranks in Deep Nets | B. Georgiev, L. Franken, Mayukh Mukherjee, Georgios Arvanitidis | | 05 Oct 2021
Rethinking Graph Auto-Encoder Models for Attributed Graph Clustering | Nairouz Mrabah, Mohamed Bouguessa, M. Touati, Riadh Ksantini | | 19 Jul 2021
How to Train Your Wide Neural Network Without Backprop: An Input-Weight Alignment Perspective | Akhilan Boopathy, Ila Fiete | | 15 Jun 2021
What can linearized neural networks actually say about generalization? | Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard | | 12 Jun 2021
BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search | Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, Xiaojun Chang | ViT | 23 Mar 2021
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima | N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang | ODL | 15 Sep 2016
Norm-Based Capacity Control in Neural Networks | Behnam Neyshabur, Ryota Tomioka, Nathan Srebro | | 27 Feb 2015