Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks

16 December 2020
Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis

Papers citing "Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks"

8 / 8 papers shown

High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws
M. E. Ildiz, Halil Alperen Gozeten, Ege Onur Taga, Marco Mondelli, Samet Oymak
24 Oct 2024

DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu, Enzo Tartaglione
02 Mar 2023

Strong inductive biases provably prevent harmless interpolation
Michael Aerni, Marco Milanta, Konstantin Donhauser, Fanny Yang
18 Jan 2023

Sparse Double Descent: Where Network Pruning Aggravates Overfitting
Zhengqi He, Zeke Xie, Quanzhi Zhu, Zengchang Qin
17 Jun 2022

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk
06 Sep 2021

Spectral Pruning for Recurrent Neural Networks
Takashi Furuya, Kazuma Suetake, K. Taniguchi, Hiroyuki Kusumoto, Ryuji Saiin, Tomohiro Daimon
23 May 2021

Lottery Jackpots Exist in Pre-trained Models
Yu-xin Zhang, Mingbao Lin, Yan Wang, Fei Chao, Rongrong Ji
18 Apr 2021

Label-Imbalanced and Group-Sensitive Classification under Overparameterization
Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, Christos Thrampoulidis
02 Mar 2021