n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization

arXiv:2103.11704 · 22 March 2021
Yuiko Sakuma, Hiroshi Sumihiro, Jun Nishikawa, Toshiki Nakamura, Ryoji Ikegaya
MQ

Papers citing "n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization"

2 / 2 papers shown

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
3DH · 17 Apr 2017

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ · 10 Feb 2017