Learning low-precision neural networks without Straight-Through Estimator (STE)
Z. G. Liu, Matthew Mattina
arXiv:1903.01061, 4 March 2019
Papers citing "Learning low-precision neural networks without Straight-Through Estimator (STE)" (9 papers)
Patch-wise Mixed-Precision Quantization of Vision Transformer. Junrui Xiao, Zhikai Li, Lianwei Yang, Qingyi Gu. 11 May 2023.

QFT: Post-training quantization via fast joint finetuning of all degrees of freedom. Alexander Finkelstein, Ella Fuchs, Idan Tal, Mark Grobman, Niv Vosco, Eldad Meller. 5 Dec 2022.

Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution. Yushu Wu, Yifan Gong, Pu Zhao, Yanyu Li, Zheng Zhan, Wei Niu, Hao Tang, Minghai Qin, Bin Ren, Yanzhi Wang. 25 Jul 2022.

Differentiable Model Compression via Pseudo Quantization Noise. Alexandre Défossez, Yossi Adi, Gabriel Synnaeve. 20 Apr 2021.

Learnable Companding Quantization for Accurate Low-bit Neural Networks. Kohei Yamamoto. 12 Mar 2021.

Pruning and Quantization for Deep Neural Network Acceleration: A Survey. Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang. 24 Jan 2021.

Efficient Residue Number System Based Winograd Convolution. Zhi-Gang Liu, Matthew Mattina. 23 Jul 2020.

Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers. Manuele Rusci, Alessandro Capotondi, Luca Benini. 30 May 2019.

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights. Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen. 10 Feb 2017.