Training Deep Neural Networks with 8-bit Floating Point Numbers
19 December 2018
Naigang Wang, Jungwook Choi, D. Brand, Chia-Yu Chen, K. Gopalakrishnan
MQ

Papers citing "Training Deep Neural Networks with 8-bit Floating Point Numbers"

12 / 212 papers shown

5 Parallel Prism: A topology for pipelined implementations of convolutional neural networks using computational memory
M. Dazzi, Abu Sebastian, P. Francese, Thomas Parnell, Luca Benini, E. Eleftheriou
GNN
08 Jun 2019

Training large-scale ANNs on simulated resistive crossbar arrays
IEEE Design & Test (IEEE D&T), 2019
Malte J. Rasch, Tayfun Gokmen, W. Haensch
06 Jun 2019

Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
USENIX Security Symposium (USENIX Security), 2019
Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras
AAML
03 Jun 2019

A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off
Neural Information Processing Systems (NeurIPS), 2019
Yaniv Blumenfeld, D. Gilboa, Daniel Soudry
MQ
03 Jun 2019

Mixed Precision Training With 8-bit Floating Point
Naveen Mellempudi, Sudarshan Srinivasan, Dipankar Das, Bharat Kaul
MQ
29 May 2019

SWALP: Stochastic Weight Averaging in Low-Precision Training
Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, A. Wilson, Christopher De Sa
26 Apr 2019

Digital Electronics and Analog Photonics for Convolutional Neural Networks (DEAP-CNNs)
Viraj Bangari, Bicky A. Marquez, Heidi B. Miller, A. Tait, M. Nahmias, T. F. D. Lima, Hsuan-Tung Peng, Paul R. Prucnal, B. Shastri
23 Apr 2019

Distributed Deep Learning Strategies For Automatic Speech Recognition
Wei Zhang, Xiaodong Cui, Ulrich Finkler, Brian Kingsbury, G. Saon, David S. Kung, M. Picheny
10 Apr 2019

CodeNet: Training Large Scale Neural Networks in Presence of Soft-Errors
Sanghamitra Dutta, Ziqian Bai, Tze Meng Low, P. Grover
04 Mar 2019

Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, A. Agrawal, Naresh R. Shanbhag, K. Gopalakrishnan
MQ
19 Jan 2019

A note on solving nonlinear optimization problems in variable precision
Serge Gratton, P. Toint
09 Dec 2018

Compact and Computationally Efficient Representation of Deep Neural Networks
Simon Wiedemann, K. Müller, Wojciech Samek
MQ
27 May 2018