Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks

19 January 2019
Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, A. Agrawal, Naresh R. Shanbhag, K. Gopalakrishnan
arXiv:1901.06588

Papers citing "Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks"

Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators
Yaniv Blumenfeld, Itay Hubara, Daniel Soudry
25 Jan 2024

Numerical Stability of DeepGOPlus Inference
Inés Gonzalez Pepe, Yohan Chatelain, Gregory Kiar, Tristan Glatard
13 Dec 2022

Compacting Deep Neural Networks for Internet of Things: Methods and Applications
Ke Zhang, Hanbo Ying, Hongning Dai, Lin Li, Yuanyuan Peng, Keyi Guo, Hongfang Yu
20 Mar 2021

FPRaker: A Processing Element For Accelerating Neural Network Training
Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos
15 Oct 2020