arXiv:2110.08271
Training Deep Neural Networks with Joint Quantization and Pruning of Weights and Activations
Xinyu Zhang, Ian Colbert, Ken Kreutz-Delgado, Srinjoy Das
15 October 2021 · MQ
Papers citing "Training Deep Neural Networks with Joint Quantization and Pruning of Weights and Activations" (7 of 7 papers shown)
Effective Interplay between Sparsity and Quantization: From Theory to Practice
Simla Burcu Harma, Ayan Chakraborty, Elizaveta Kostenok, Danila Mishin, Dongho Ha, ..., Martin Jaggi, Ming Liu, Yunho Oh, Suvinay Subramanian, Amir Yazdanbakhsh (MQ) · 31 May 2024
An Energy-Efficient Edge Computing Paradigm for Convolution-based Image Upsampling
Ian Colbert, Ken Kreutz-Delgado, Srinjoy Das · 15 Jul 2021
Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste (MQ) · 31 Jan 2021
Pruning and Quantization for Deep Neural Network Acceleration: A Survey
Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang (MQ) · 24 Jan 2021
Universal Deep Neural Network Compression
Yoojin Choi, Mostafa El-Khamy, Jungwon Lee (MQ) · 07 Feb 2018
Image-to-Image Translation with Conditional Adversarial Networks
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros (SSeg) · 21 Nov 2016
Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
Wenzhe Shi, Jose Caballero, Ferenc Huszár, J. Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, Zehan Wang (SupR) · 16 Sep 2016