ResearchTrend.AI

Pruning's Effect on Generalization Through the Lens of Training and Regularization
Tian Jin, Michael Carbin, Daniel M. Roy, Jonathan Frankle, Gintare Karolina Dziugaite
arXiv:2210.13738, 25 October 2022

Papers citing "Pruning's Effect on Generalization Through the Lens of Training and Regularization"

24 / 24 papers shown
  1. As easy as PIE: understanding when pruning causes language models to disagree. Pietro Tropeano, Maria Maistro, Tuukka Ruotsalo, Christina Lioma. 27 Mar 2025.
  2. CRoP: Context-wise Robust Static Human-Sensing Personalization. Sawinder Kaur, Avery Gump, Jingyu Xin, Yi Xiao, Harshit Sharma, Nina R. Benway, J. Preston, Asif Salekin. 26 Sep 2024.
  3. Are Sparse Neural Networks Better Hard Sample Learners? Q. Xiao, Boqian Wu, Lu Yin, Christopher Neil Gadzinski, Tianjin Huang, Mykola Pechenizkiy, D. Mocanu. 13 Sep 2024.
  4. A Tighter Complexity Analysis of SparseGPT. Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao-quan Song. 22 Aug 2024.
  5. Mask in the Mirror: Implicit Sparsification. Tom Jacobs, R. Burkholz. 19 Aug 2024.
  6. Mixture of Experts in a Mixture of RL settings. Timon Willi, J. Obando-Ceron, Jakob Foerster, Karolina Dziugaite, Pablo Samuel Castro. [MoE] 26 Jun 2024.
  7. Optimal Eye Surgeon: Finding Image Priors through Sparse Generators at Initialization. Avrajit Ghosh, Xitong Zhang, Kenneth K. Sun, Qing Qu, S. Ravishankar, Rongrong Wang. [MedIm] 07 Jun 2024.
  8. Cyclic Sparse Training: Is it Enough? Advait Gadhikar, Sree Harsha Nelaturu, R. Burkholz. [CLL] 04 Jun 2024.
  9. Insights into the Lottery Ticket Hypothesis and Iterative Magnitude Pruning. Tausifa Jan Saleem, Ramanjit Ahuja, Surendra Prasad, Brejesh Lall. 22 Mar 2024.
  10. Pruning for Protection: Increasing Jailbreak Resistance in Aligned LLMs Without Fine-Tuning. Adib Hasan, Ileana Rugina, Alex Wang. [AAML] 19 Jan 2024.
  11. Dataset Difficulty and the Role of Inductive Bias. Devin Kwok, Nikhil Anand, Jonathan Frankle, Gintare Karolina Dziugaite, David Rolnick. 03 Jan 2024.
  12. The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction. Pratyusha Sharma, Jordan T. Ash, Dipendra Kumar Misra. [LRM] 21 Dec 2023.
  13. MGAS: Multi-Granularity Architecture Search for Trade-Off Between Model Effectiveness and Efficiency. Xiaoyun Liu, Divya Saxena, Jiannong Cao, Yuqing Zhao, Penghui Ruan. 23 Oct 2023.
  14. The Cost of Down-Scaling Language Models: Fact Recall Deteriorates before In-Context Learning. Tian Jin, Nolan Clement, Xin Dong, Vaishnavh Nagarajan, Michael Carbin, Jonathan Ragan-Kelley, Gintare Karolina Dziugaite. [LRM] 07 Oct 2023.
  15. Benchmarking Adversarial Robustness of Compressed Deep Learning Models. Brijesh Vora, Kartik Patwari, Syed Mahbub Hafiz, Zubair Shafiq, Chen-Nee Chuah. [AAML] 16 Aug 2023.
  16. Quantifying lottery tickets under label noise: accuracy, calibration, and complexity. V. Arora, Daniele Irto, Sebastian Goldt, G. Sanguinetti. 21 Jun 2023.
  17. A Simple and Effective Pruning Approach for Large Language Models. Mingjie Sun, Zhuang Liu, Anna Bair, J. Zico Kolter. 20 Jun 2023.
  18. Generalization Bounds for Magnitude-Based Pruning via Sparse Matrix Sketching. E. Guha, Prasanjit Dubey, X. Huo. [MLT] 30 May 2023.
  19. Neural Sculpting: Uncovering hierarchically modular task structure in neural networks through pruning and network analysis. S. M. Patil, Loizos Michael, C. Dovrolis. 28 May 2023.
  20. Effective Neural Network $L_0$ Regularization With BinMask. Kai Jia, Martin Rinard. 21 Apr 2023.
  21. Investigating the Impact of Model Width and Density on Generalization in Presence of Label Noise. Yihao Xue, Kyle Whitecross, Baharan Mirzasoleiman. [NoLa] 17 Aug 2022.
  22. What is the State of Neural Network Pruning? Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag. 06 Mar 2020.
  23. Comparing Rewinding and Fine-tuning in Neural Network Pruning. Alex Renda, Jonathan Frankle, Michael Carbin. 05 Mar 2020.
  24. Norm-Based Capacity Control in Neural Networks. Behnam Neyshabur, Ryota Tomioka, Nathan Srebro. 27 Feb 2015.