arXiv: 2402.00084
EPSD: Early Pruning with Self-Distillation for Efficient Model Compression
31 January 2024
Dong Chen, Ning Liu, Yichen Zhu, Zhengping Che, Rui Ma, Fachao Zhang, Xiaofeng Mou, Yi Chang, Jian Tang
Papers citing "EPSD: Early Pruning with Self-Distillation for Efficient Model Compression" (7 papers)
Optimising TinyML with Quantization and Distillation of Transformer and Mamba Models for Indoor Localisation on Edge Devices
Thanaphon Suwannaphong, Ferdian Jovan, I. Craddock, Ryan McConville
12 Dec 2024
LLMCBench: Benchmarking Large Language Model Compression for Efficient Deployment
Ge Yang, Changyi He, J. Guo, Jianyu Wu, Yifu Ding, Aishan Liu, Haotong Qin, Pengliang Ji, Xianglong Liu
28 Oct 2024
CP³: Channel Pruning Plug-in for Point-based Networks
Yaomin Huang, Ning Liu, Zhengping Che, Zhiyuan Xu, Chaomin Shen, Yaxin Peng, Guixu Zhang, Xinmei Liu, Feifei Feng, Jian Tang
23 Mar 2023
MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
Sachin Mehta, Mohammad Rastegari
05 Oct 2021
Prune Your Model Before Distill It
Jinhyuk Park, Albert No
30 Sep 2021
Distilling Knowledge via Knowledge Review
Pengguang Chen, Shu-Lin Liu, Hengshuang Zhao, Jiaya Jia
19 Apr 2021
Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020