Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks
arXiv:2012.08749 · 16 December 2020
Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis
Papers citing "Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks" (8 papers)
High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws
M. E. Ildiz, Halil Alperen Gozeten, Ege Onur Taga, Marco Mondelli, Samet Oymak · 24 Oct 2024
DSD²: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu, Enzo Tartaglione · 02 Mar 2023
Strong inductive biases provably prevent harmless interpolation
Michael Aerni, Marco Milanta, Konstantin Donhauser, Fanny Yang · 18 Jan 2023
Sparse Double Descent: Where Network Pruning Aggravates Overfitting
Zhengqi He, Zeke Xie, Quanzhi Zhu, Zengchang Qin · 17 Jun 2022
A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk · 06 Sep 2021
Spectral Pruning for Recurrent Neural Networks
Takashi Furuya, Kazuma Suetake, K. Taniguchi, Hiroyuki Kusumoto, Ryuji Saiin, Tomohiro Daimon · 23 May 2021
Lottery Jackpots Exist in Pre-trained Models
Yu-xin Zhang, Mingbao Lin, Yan Wang, Fei Chao, Rongrong Ji · 18 Apr 2021
Label-Imbalanced and Group-Sensitive Classification under Overparameterization
Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, Christos Thrampoulidis · 02 Mar 2021