arXiv:2103.11704
Cited By
n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization
22 March 2021
Authors: Yuiko Sakuma, Hiroshi Sumihiro, Jun Nishikawa, Toshiki Nakamura, Ryoji Ikegaya
Category: MQ
Papers citing "n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization" (2 papers shown):
1. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
   Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
   Category: 3DH · 17 Apr 2017
2. Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
   Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
   Category: MQ · 10 Feb 2017