Joint Pruning & Quantization for Extremely Sparse Neural Networks
arXiv: 2010.01892
5 October 2020
Po-Hsiang Yu, Sih-Sian Wu, Jan P. Klopp, Liang-Gee Chen, Shao-Yi Chien
Papers citing "Joint Pruning & Quantization for Extremely Sparse Neural Networks" (5 of 5 papers shown)

Effective Interplay between Sparsity and Quantization: From Theory to Practice
Simla Burcu Harma, Ayan Chakraborty, Elizaveta Kostenok, Danila Mishin, Dongho Ha, ..., Martin Jaggi, Ming Liu, Yunho Oh, Suvinay Subramanian, Amir Yazdanbakhsh
31 May 2024

QFT: Post-training quantization via fast joint finetuning of all degrees of freedom
Alexander Finkelstein, Ella Fuchs, Idan Tal, Mark Grobman, Niv Vosco, Eldad Meller
5 December 2022

Efficient Quantized Sparse Matrix Operations on Tensor Cores
Shigang Li, Kazuki Osawa, Torsten Hoefler
14 September 2022

Training Deep Neural Networks with Joint Quantization and Pruning of Weights and Activations
Xinyu Zhang, Ian Colbert, Ken Kreutz-Delgado, Srinjoy Das
15 October 2021

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
10 February 2017