Scalable Methods for 8-bit Training of Neural Networks
Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry · 25 May 2018 [MQ]

Papers citing "Scalable Methods for 8-bit Training of Neural Networks"

50 of 167 citing papers shown below.
Smoothed Differential Privacy · Ao Liu, Yu-Xiang Wang, Lirong Xia · 04 Jul 2021
LNS-Madam: Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update · Jiawei Zhao, Steve Dai, Rangharajan Venkatesan, Brian Zimmer, Mustafa Ali, Ming-Yu Liu, Brucek Khailany, B. Dally, Anima Anandkumar · 26 Jun 2021 [MQ]
CD-SGD: Distributed Stochastic Gradient Descent with Compression and Delay Compensation · Enda Yu, Dezun Dong, Yemao Xu, Shuo Ouyang, Xiangke Liao · 21 Jun 2021
Dynamic Clone Transformer for Efficient Convolutional Neural Networks · Longqing Ye · 12 Jun 2021 [ViT]
Towards Efficient Full 8-bit Integer DNN Online Training on Resource-limited Devices without Batch Normalization · Yukuan Yang, Xiaowei Chi, Lei Deng, Tianyi Yan, Feng Gao, Guoqi Li · 27 May 2021 [MQ]
Lightweight Compression of Intermediate Neural Network Features for Collaborative Intelligence · R. Cohen, Hyomin Choi, Ivan V. Bajić · 15 May 2021
Lightweight compression of neural network feature tensors for collaborative intelligence · R. Cohen, Hyomin Choi, Ivan V. Bajić · 12 May 2021
In-Hindsight Quantization Range Estimation for Quantized Training · Marios Fournarakis, Markus Nagel · 10 May 2021 [MQ]
InstantNet: Automated Generation and Deployment of Instantaneously Switchable-Precision Networks · Yonggan Fu, Zhongzhi Yu, Yongan Zhang, Yifan Jiang, Chaojian Li, Yongyuan Liang, Mingchao Jiang, Zhangyang Wang, Yingyan Lin · 22 Apr 2021
"BNN - BN = ?": Training Binary Neural Networks without Batch
  Normalization
"BNN - BN = ?": Training Binary Neural Networks without Batch Normalization
Tianlong Chen
Zhenyu (Allen) Zhang
Xu Ouyang
Zechun Liu
Zhiqiang Shen
Zhangyang Wang
MQ
33
36
0
16 Apr 2021
All-You-Can-Fit 8-Bit Flexible Floating-Point Format for Accurate and Memory-Efficient Inference of Deep Neural Networks · Cheng-Wei Huang, Tim-Wei Chen, Juinn-Dar Huang · 15 Apr 2021 [MQ]
Distributed Learning Systems with First-order Methods · Ji Liu, Ce Zhang · 12 Apr 2021
Zero-shot Adversarial Quantization · Yuang Liu, Wei Zhang, Jun Wang · 29 Mar 2021 [MQ]
RCT: Resource Constrained Training for Edge AI · Tian Huang, Tao Luo, Ming Yan, Joey Tianyi Zhou, Rick Siow Mong Goh · 26 Mar 2021
n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization · Yuiko Sakuma, Hiroshi Sumihiro, Jun Nishikawa, Toshiki Nakamura, Ryoji Ikegaya · 22 Mar 2021 [MQ]
An Information-Theoretic Justification for Model Pruning · Berivan Isik, Tsachy Weissman, Albert No · 16 Feb 2021
Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks · Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry · 16 Feb 2021
Neural Network Compression for Noisy Storage Devices · Berivan Isik, Kristy Choi, Xin-Yang Zheng, Tsachy Weissman, Stefano Ermon, H. P. Wong, Armin Alaghi · 15 Feb 2021
Distribution Adaptive INT8 Quantization for Training CNNs · Kang Zhao, Sida Huang, Pan Pan, Yinghan Li, Yingya Zhang, Zhenyu Gu, Yinghui Xu · 09 Feb 2021 [MQ]
Rethinking Floating Point Overheads for Mixed Precision DNN Accelerators · Hamzah Abdel-Aziz, Ali Shafiee, J. Shin, A. Pedram, Joseph Hassoun · 27 Jan 2021 [MQ]
Pruning and Quantization for Deep Neural Network Acceleration: A Survey · Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang · 24 Jan 2021 [MQ]
SmartDeal: Re-Modeling Deep Network Weights for Efficient Inference and Training · Xiaohan Chen, Yang Katie Zhao, Yue Wang, Pengfei Xu, Haoran You, Chaojian Li, Y. Fu, Yingyan Lin, Zhangyang Wang · 04 Jan 2021
FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training · Y. Fu, Haoran You, Yang Katie Zhao, Yue Wang, Chaojian Li, K. Gopalakrishnan, Zhangyang Wang, Yingyan Lin · 24 Dec 2020 [MQ]
Memory Optimization for Deep Networks · Aashaka Shah, Chaoxia Wu, Jayashree Mohan, Vijay Chidambaram, Philipp Krahenbuhl · 27 Oct 2020
A Statistical Framework for Low-bitwidth Training of Deep Neural Networks · Jianfei Chen, Yujie Gai, Z. Yao, Michael W. Mahoney, Joseph E. Gonzalez · 27 Oct 2020 [MQ]
ShiftAddNet: A Hardware-Inspired Deep Network · Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin · 24 Oct 2020 [OOD, MQ]
A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference · Sanghyun Hong, Yigitcan Kaya, Ionut-Vlad Modoranu, Tudor Dumitras · 06 Oct 2020 [AAML]
NITI: Training Integer Neural Networks Using Integer-only Arithmetic · Maolin Wang, Seyedramin Rasoulinezhad, Philip H. W. Leong, Hayden Kwok-Hay So · 28 Sep 2020 [MQ]
Rotated Binary Neural Network · Mingbao Lin, Rongrong Ji, Zi-Han Xu, Baochang Zhang, Yan Wang, Yongjian Wu, Feiyue Huang, Chia-Wen Lin · 28 Sep 2020 [MQ]
Normalization Techniques in Training DNNs: Methodology, Analysis and Application · Lei Huang, Jie Qin, Yi Zhou, Fan Zhu, Li Liu, Ling Shao · 27 Sep 2020 [AI4CE]
Low-Rank Training of Deep Neural Networks for Emerging Memory Technology · Albert Gural, P. Nadeau, M. Tikekar, B. Murmann · 08 Sep 2020
An FPGA Accelerated Method for Training Feed-forward Neural Networks Using Alternating Direction Method of Multipliers and LSMR · Seyedeh Niusha Alavi Foumani, Ce Guo, Wayne Luk · 06 Sep 2020
Optimal Quantization for Batch Normalization in Neural Network Deployments and Beyond · Dachao Lin, Peiqin Sun, Guangzeng Xie, Shuchang Zhou, Zhihua Zhang · 30 Aug 2020 [MQ]
AQD: Towards Accurate Fully-Quantized Object Detection · Peng Chen, Jing Liu, Bohan Zhuang, Mingkui Tan, Chunhua Shen · 14 Jul 2020 [MQ]
Enabling On-Device CNN Training by Self-Supervised Instance Filtering and Error Map Pruning · Yawen Wu, Zhepeng Wang, Yiyu Shi, J. Hu · 07 Jul 2020
Neural gradients are near-lognormal: improved quantized and sparse training · Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry · 15 Jun 2020 [MQ]
Exploring the Potential of Low-bit Training of Convolutional Neural Networks · Kai Zhong, Xuefei Ning, Guohao Dai, Zhenhua Zhu, Tianchen Zhao, Shulin Zeng, Yu Wang, Huazhong Yang · 04 Jun 2020 [MQ]
SmartExchange: Trading Higher-cost Memory Storage/Access for Lower-cost Computation · Yang Katie Zhao, Xiaohan Chen, Yue Wang, Chaojian Li, Haoran You, Y. Fu, Yuan Xie, Zhangyang Wang, Yingyan Lin · 07 May 2020 [MQ]
Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training · Simon Wiedemann, Temesgen Mehari, Kevin Kepp, Wojciech Samek · 09 Apr 2020
Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks · Léopold Cambier, Anahita Bhiwandiwalla, Ting Gong, M. Nekuii, Oguz H. Elibol, Hanlin Tang · 16 Jan 2020 [MQ]
Sparse Weight Activation Training · Md Aamir Raihan, Tor M. Aamodt · 07 Jan 2020
Fractional Skipping: Towards Finer-Grained Dynamic CNN Inference · Jianghao Shen, Y. Fu, Yue Wang, Pengfei Xu, Zhangyang Wang, Yingyan Lin · 03 Jan 2020 [MQ]
Towards Unified INT8 Training for Convolutional Neural Network · Feng Zhu, Ruihao Gong, F. Yu, Xianglong Liu, Yanfei Wang, Zhelong Li, Xiuqi Yang, Junjie Yan · 29 Dec 2019 [MQ]
Few Shot Network Compression via Cross Distillation · Haoli Bai, Jiaxiang Wu, Irwin King, Michael Lyu · 21 Nov 2019 [FedML]
Distributed Low Precision Training Without Mixed Precision · Zehua Cheng, Weiyan Wang, Yan Pan, Thomas Lukasiewicz · 18 Nov 2019 [MQ]
Adaptive Precision Training: Quantify Back Propagation in Neural Networks with Fixed-point Numbers · Xishan Zhang, Shaoli Liu, Rui Zhang, Chang-Shu Liu, Di Huang, ..., Jiaming Guo, Yu Kang, Qi Guo, Zidong Du, Yunji Chen · 01 Nov 2019 [MQ]
E2-Train: Training State-of-the-art CNNs with Over 80% Energy Savings · Yue Wang, Ziyu Jiang, Xiaohan Chen, Pengfei Xu, Yang Katie Zhao, Yingyan Lin, Zhangyang Wang · 29 Oct 2019 [MQ]
LeanConvNets: Low-cost Yet Effective Convolutional Neural Networks · Jonathan Ephrath, Moshe Eliasof, Lars Ruthotto, E. Haber, Eran Treister · 29 Oct 2019
OverQ: Opportunistic Outlier Quantization for Neural Network Accelerators · Ritchie Zhao, Jordan Dotzel, Zhanqiu Hu, Preslav Ivanov, Christopher De Sa, Zhiru Zhang · 13 Oct 2019 [MQ]
MLPerf Training Benchmark · Arya D. McCarthy, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, ..., Carole-Jean Wu, Lingjie Xu, Masafumi Yamazaki, C. Young, Matei A. Zaharia · 02 Oct 2019