DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients

Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou · MQ · 20 June 2016
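For background on the paper whose citations are listed below: DoReFa-Net trains convolutional networks with low-bitwidth weights, activations, and gradients. As a point of reference, here is a minimal sketch, assuming PyTorch, of the k-bit uniform quantizer with a straight-through gradient estimator that this line of work builds on; the class and function names are illustrative, not the authors' reference implementation.

```python
import torch


class QuantizeK(torch.autograd.Function):
    """k-bit uniform quantization of a tensor with values in [0, 1].

    Forward:  r_o = round((2^k - 1) * r_i) / (2^k - 1)
    Backward: straight-through estimator -- the gradient is passed
    through the non-differentiable rounding step unchanged.
    """

    @staticmethod
    def forward(ctx, r_i, k):
        n = float(2 ** k - 1)
        return torch.round(r_i * n) / n

    @staticmethod
    def backward(ctx, grad_output):
        # One gradient per forward input: pass-through for r_i, none for k.
        return grad_output, None


def quantize_activation(x, k=2):
    """Illustrative helper: clip activations to [0, 1], then quantize to k bits."""
    return QuantizeK.apply(torch.clamp(x, 0.0, 1.0), k)
```

Variants of this operator, applied to weights, activations, and gradients, are the common starting point for many of the model-quantization (MQ) papers cited below.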

Papers citing "DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients"

41 of 291 citing papers shown.

DFTerNet: Towards 2-bit Dynamic Fusion Networks for Accurate Human Activity Recognition
Zhan Yang, Osolo Ian Raymond, Chengyuan Zhang, Ying Wan, J. Long · CVBM · 31 Jul 2018

Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)
Jungwook Choi, P. Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, K. Gopalakrishnan · MQ · 17 Jul 2018

Deep Learning for Semantic Segmentation on Minimal Hardware
S. V. Dijk, Marcus M. Scheunemann · SSeg · 15 Jul 2018

FINN-L: Library Extensions and Design Trade-off Analysis for Variable Precision LSTM Networks on FPGAs
Vladimir Rybalkin, Alessandro Pappalardo, M. M. Ghaffar, Giulio Gambardella, Norbert Wehn, Michaela Blott · 11 Jul 2018

XNOR Neural Engine: a Hardware Accelerator IP for 21.6 fJ/op Binary Neural Network Inference
Francesco Conti, Pasquale Davide Schiavone, Luca Benini · 09 Jul 2018

Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization
Jiaxiang Wu, Weidong Huang, Junzhou Huang, Tong Zhang · 21 Jun 2018

Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?
Shilin Zhu, Xin Dong, Hao Su · MQ · 20 Jun 2018

Adding New Tasks to a Single Network with Weight Transformations using Binary Masks
Massimiliano Mancini, Elisa Ricci, Barbara Caputo, Samuel Rota Buló · 28 May 2018

Accelerating CNN inference on FPGAs: A Survey
K. Abdelouahab, Maxime Pelcat, Jocelyn Serot, F. Berry · AI4CE · 26 May 2018

Scalable Methods for 8-bit Training of Neural Networks
Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry · MQ · 25 May 2018

PACT: Parameterized Clipping Activation for Quantized Neural Networks
Jungwook Choi, Zhuo Wang, Swagath Venkataramani, P. Chuang, Vijayalakshmi Srinivasan, K. Gopalakrishnan · MQ · 16 May 2018

Hu-Fu: Hardware and Software Collaborative Attack Framework against Neural Networks
Wenshuo Li, Jincheng Yu, Xuefei Ning, Pengjun Wang, Qi Wei, Yu Wang, Huazhong Yang · AAML · 14 May 2018

Quantization Mimic: Towards Very Tiny CNN for Object Detection
Yi Wei, Xinyu Pan, Hongwei Qin, Wanli Ouyang, Junjie Yan · ObjD · 06 May 2018

IGCV2: Interleaved Structured Sparse Convolutional Neural Networks
Guotian Xie, Jingdong Wang, Ting Zhang, Jianhuang Lai, Richang Hong, Guo-Jun Qi · 17 Apr 2018

Hybrid Binary Networks: Optimizing for Accuracy, Efficiency and Memory
Ameya Prabhu, Vishal Batchu, Rohit Gajawada, Sri Aurobindo Munagala, A. Namboodiri · MQ · 11 Apr 2018

Deep Neural Network Compression with Single and Multiple Level Quantization
Yuhui Xu, Yongzhuang Wang, Aojun Zhou, Weiyao Lin, H. Xiong · MQ · 06 Mar 2018

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun, Torsten Hoefler · GNN · 26 Feb 2018

PBGen: Partial Binarization of Deconvolution-Based Generators for Edge Intelligence
Jinglan Liu, Jiaxin Zhang, Yukun Ding, Xiaowei Xu, Meng-Long Jiang, Yiyu Shi · 26 Feb 2018

Loss-aware Weight Quantization of Deep Networks
Lu Hou, James T. Kwok · MQ · 23 Feb 2018

BinaryRelax: A Relaxation Approach For Training Deep Neural Networks With Quantized Weights
Penghang Yin, Shuai Zhang, J. Lyu, Stanley Osher, Y. Qi, Jack Xin · MQ · 19 Jan 2018

Fix your classifier: the marginal value of training the last weight layer
Elad Hoffer, Itay Hubara, Daniel Soudry · 14 Jan 2018

Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
Benoit Jacob, S. Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, Dmitry Kalenichenko · MQ · 15 Dec 2017

Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
Yujun Lin, Song Han, Huizi Mao, Yu Wang, W. Dally · 05 Dec 2017

Deep Expander Networks: Efficient Deep Networks from Graph Theory
Ameya Prabhu, G. Varma, A. Namboodiri · GNN · 23 Nov 2017

ADaPTION: Toolbox and Benchmark for Training Convolutional Neural Networks with Reduced Numerical Precision Weights and Activation
Moritz B. Milde, Daniel Neil, Alessandro Aimar, T. Delbruck, Giacomo Indiveri · MQ · 13 Nov 2017

Attacking Binarized Neural Networks
A. Galloway, Graham W. Taylor, M. Moussa · MQ, AAML · 01 Nov 2017

Minimum Energy Quantized Neural Networks
Bert Moons, Koen Goetschalckx, Nick Van Berckelaer, Marian Verhelst · MQ · 01 Nov 2017

Towards Effective Low-bitwidth Convolutional Neural Networks
Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, Ian Reid · MQ · 01 Nov 2017

Deep Learning as a Mixed Convex-Combinatorial Optimization Problem
A. Friesen, Pedro M. Domingos · 31 Oct 2017

Gradient Sparsification for Communication-Efficient Distributed Optimization
Jianqiao Wangni, Jialei Wang, Ji Liu, Tong Zhang · 26 Oct 2017

Mixed Precision Training
Paulius Micikevicius, Sharan Narang, Jonah Alben, G. Diamos, Erich Elsen, ..., Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, Hao Wu · 10 Oct 2017

Flexible Network Binarization with Layer-wise Priority
He Wang, Yi Tian Xu, Bingbing Ni, Hongteng Xu · MQ · 13 Sep 2017

BitNet: Bit-Regularized Deep Neural Networks
Aswin Raghavan, Mohamed R. Amer, S. Chai, Graham Taylor · MQ · 16 Aug 2017

Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform
Chaim Baskin, Natan Liss, Evgenii Zheltonozhskii, A. Bronstein, A. Mendelson · GNN, MQ · 31 Jul 2017

SEP-Nets: Small and Effective Pattern Networks
Zhe Li, Xiaoyu Wang, Xutao Lv, Tianbao Yang · 13 Jun 2017

Binarized Convolutional Landmark Localizers for Human Pose Estimation and Face Alignment with Limited Resources
Adrian Bulat, Georgios Tzimiropoulos · CVBM, 3DV · 02 Mar 2017

Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen · MQ · 10 Feb 2017

Deep Learning with Low Precision by Half-wave Gaussian Quantization
Zhaowei Cai, Xiaodong He, Jian Sun, Nuno Vasconcelos · MQ · 03 Feb 2017

Mixed Low-precision Deep Learning Inference using Dynamic Fixed Point
Naveen Mellempudi, Abhisek Kundu, Dipankar Das, Dheevatsa Mudigere, Bharat Kaul · MQ · 31 Jan 2017

Trained Ternary Quantization
Chenzhuo Zhu, Song Han, Huizi Mao, W. Dally · MQ · 04 Dec 2016

Training Bit Fully Convolutional Network for Fast Semantic Segmentation
He Wen, Shuchang Zhou, Zhe Liang, Yuxiang Zhang, Dieqiao Feng, Xinyu Zhou, Cong Yao · MQ, SSeg · 01 Dec 2016