Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs
International Conference on Machine Learning (ICML), 2024
29 September 2023
Lu Yin, Ajay Jaiswal, Shiwei Liu, Souvik Kundu, Zinan Lin
Links: arXiv (abs) · PDF · HTML · GitHub (16★)
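The title names the operation under study: magnitude pruning, i.e. zeroing the pre-trained weights with the smallest absolute values. Below is a minimal PyTorch sketch of global magnitude pruning; the function name magnitude_prune, the restriction to nn.Linear layers, and the single global threshold are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn


def magnitude_prune(model: nn.Module, sparsity: float) -> nn.Module:
    """Zero out the smallest-magnitude weights across all nn.Linear layers.

    `sparsity` is the fraction of weights to remove (0.0 to 1.0). This is a
    sketch of generic global magnitude pruning, not the paper's exact recipe.
    """
    # Gather the absolute values of every prunable weight into one vector.
    all_weights = torch.cat([
        m.weight.detach().abs().flatten()
        for m in model.modules()
        if isinstance(m, nn.Linear)
    ])

    # Global threshold: the k-th smallest magnitude, with k = sparsity * N.
    k = int(sparsity * all_weights.numel())
    if k == 0:
        return model  # nothing to prune
    threshold = all_weights.kthvalue(k).values

    # Zero every weight at or below the threshold, in place.
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, nn.Linear):
                m.weight.mul_(m.weight.abs() > threshold)
    return model
```

For instance, magnitude_prune(model, 0.5) zeros roughly half of the linear-layer weights; the paper's claim, per its title, is that performance on "difficult" downstream tasks degrades monotonically with such sparsity and is not recovered by further fine-tuning.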
Papers citing this work (5 shown):
Confident magnitude-based neural network pruning. Joaquin Alvarez. 08 Aug 2024.
Q-S5: Towards Quantized State Space Models. Steven Abreu, Jens Egholm Pedersen, Kade Heckel, Alessandro Pierro. 13 Jun 2024.
Empirical Guidelines for Deploying LLMs onto Resource-constrained Edge Devices. Ruiyang Qin, Dancheng Liu, Zheyu Yan, Zhaoxuan Tan, Zixuan Pan, Zhenge Jia, Meng Jiang, Ahmed Abbasi, Jinjun Xiong, Yiyu Shi. 06 Jun 2024.
FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping. Ajay Jaiswal, Bodun Hu, Lu Yin, Yeonju Ro, Shiwei Liu, Tianlong Chen, Aditya Akella. 05 Apr 2024.
Model Compression and Efficient Inference for Large Language Models: A Survey. Wenxiao Wang, Wei Chen, Yicong Luo, Yongliu Long, Zhengkai Lin, Liye Zhang, Binbin Lin, Deng Cai, Xiaofei He. 15 Feb 2024.