A Study of BFLOAT16 for Deep Learning Training

29 May 2019
Dhiraj D. Kalamkar
Dheevatsa Mudigere
Naveen Mellempudi
Dipankar Das
K. Banerjee
Sasikanth Avancha
Dharma Teja Vooturi
Nataraj Jammalamadaka
Jianyu Huang
Hector Yuen
Jiyan Yang
Jongsoo Park
A. Heinecke
E. Georganas
Sudarshan Srinivasan
Abhisek Kundu
M. Smelyanskiy
Bharat Kaul
Pradeep Dubey
MQ

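For context on the paper's subject: bfloat16 keeps float32's sign bit and full 8-bit exponent but truncates the mantissa from 23 to 7 bits, so it preserves float32's dynamic range at roughly 2-3 decimal digits of precision. A minimal sketch of that conversion, assuming NumPy; the helper names here are illustrative, not from the paper:

    import numpy as np

    def float32_to_bfloat16_bits(x: np.ndarray) -> np.ndarray:
        """Round float32 values to bfloat16, returned as uint16 bit patterns."""
        bits = np.asarray(x, dtype=np.float32).view(np.uint32)
        # Round-to-nearest-even on the 16 low bits being discarded:
        # bias is 0x7FFF plus the LSB of the part we keep.
        rounding_bias = 0x7FFF + ((bits >> 16) & 1)
        return ((bits + rounding_bias) >> 16).astype(np.uint16)

    def bfloat16_bits_to_float32(b: np.ndarray) -> np.ndarray:
        """Expand bfloat16 bit patterns back to float32 (exact, no rounding)."""
        return (b.astype(np.uint32) << 16).view(np.float32)

    x = np.array([1.0, 3.14159265, 1e-3, 65504.0], dtype=np.float32)
    print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(x)))

Because the exponent field matches float32's, converting in either direction is just a 16-bit shift (plus rounding on the way down), which is one reason hardware support for bfloat16 is cheap.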
Papers citing "A Study of BFLOAT16 for Deep Learning Training"

50 / 53 papers shown
70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float
Tianyi Zhang
Yang Sui
Shaochen Zhong
V. Chaudhary
Xia Hu
Anshumali Shrivastava
MQ
32
1
0
15 Apr 2025
Towards Hierarchical Multi-Step Reward Models for Enhanced Reasoning in Large Language Models
Teng Wang
Zhangyi Jiang
Zhenqi He
Wenhan Yang
Yanan Zheng
Zeyu Li
Zifan He
Shenyang Tong
Hailei Gong
LRM
90
2
0
16 Mar 2025
Advancing MAPF towards the Real World: A Scalable Multi-Agent Realistic Testbed (SMART)
Jingtian Yan
Zhifei Li
William Kang
Yulun Zhang
Stephen Smith
Jiaoyang Li
48
0
0
03 Mar 2025
Stochastic Rounding for LLM Training: Theory and Practice
Kaan Ozkara
Tao Yu
Youngsuk Park
43
0
0
27 Feb 2025
Vision-LSTM: xLSTM as Generic Vision Backbone
Benedikt Alkin
M. Beck
Korbinian Poppel
Sepp Hochreiter
Johannes Brandstetter
VLM
69
43
0
24 Feb 2025
Stable-SPAM: How to Train in 4-Bit More Stably than 16-Bit Adam
Tianjin Huang
Haotian Hu
Zhenyu Zhang
Gaojie Jin
Xianrui Li
...
Tianlong Chen
Lu Liu
Qingsong Wen
Zhangyang Wang
Shiwei Liu
MQ
43
0
0
24 Feb 2025
GoRA: Gradient-driven Adaptive Low Rank Adaptation
Haonan He
Peng Ye
Yuchen Ren
Yuan Yuan
Lei Chen
AI4TS
AI4CE
238
0
0
13 Feb 2025
Democratizing AI: Open-source Scalable LLM Training on GPU-based Supercomputers
Siddharth Singh
Prajwal Singhania
Aditya K. Ranjan
John Kirchenbauer
Jonas Geiping
...
Abhimanyu Hans
Manli Shu
Aditya Tomar
Tom Goldstein
A. Bhatele
105
2
0
12 Feb 2025
Steel-LLM: From Scratch to Open Source -- A Personal Journey in Building a Chinese-Centric LLM
Qingshui Gu
Shu Li
Tianyu Zheng
Zhaoxiang Zhang
281
0
0
10 Feb 2025
Towards Cross-Tokenizer Distillation: the Universal Logit Distillation Loss for LLMs
Nicolas Boizard
Kevin El Haddad
Céline Hudelot
Pierre Colombo
83
15
0
28 Jan 2025
COAT: Compressing Optimizer states and Activation for Memory-Efficient FP8 Training
Haocheng Xi
Han Cai
Ligeng Zhu
Yaojie Lu
Kurt Keutzer
Jianfei Chen
Song Han
MQ
75
9
0
25 Oct 2024
ToW: Thoughts of Words Improve Reasoning in Large Language Models
Zhikun Xu
Ming Shen
Jacob Dineen
Zhaonan Li
Xiao Ye
Shijie Lu
Aswin Rrv
Chitta Baral
Ben Zhou
LRM
223
1
0
21 Oct 2024
Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models
Byung-Kwan Lee
Chae Won Kim
Beomchan Park
Yonghyun Ro
MLLM
LRM
48
19
0
24 May 2024
Granite Code Models: A Family of Open Foundation Models for Code Intelligence
Mayank Mishra
Matt Stallone
Gaoyuan Zhang
Songlin Yang
Aditya Prasad
...
Amith Singhee
Nirmit Desai
David D. Cox
Ruchir Puri
Yikang Shen
AI4TS
63
58
0
07 May 2024
Q-Newton: Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent
Pingzhi Li
Junyu Liu
Hanrui Wang
Tianlong Chen
96
1
0
30 Apr 2024
Do You Trust Your Model? Emerging Malware Threats in the Deep Learning Ecosystem
Dorjan Hitaj
Giulio Pagnotta
Fabio De Gaspari
Sediola Ruko
Briland Hitaj
Luigi V. Mancini
Fernando Perez-Cruz
42
4
0
06 Mar 2024
SInViG: A Self-Evolving Interactive Visual Agent for Human-Robot Interaction
Jie Xu
Hanbo Zhang
Xinghang Li
Huaping Liu
Xuguang Lan
Tao Kong
LM&Ro
38
3
0
19 Feb 2024
Training and inference of large language models using 8-bit floating point
Sergio P. Perez
Yan Zhang
James Briggs
Charlie Blake
Prashanth Krishnamurthy
Paul Balanca
Carlo Luschi
Stephen Barlow
Andrew William Fitzgibbon
MQ
39
18
0
29 Sep 2023
SummaryMixing: A Linear-Complexity Alternative to Self-Attention for Speech Recognition and Understanding
Titouan Parcollet
Rogier van Dalen
Shucong Zhang
S. Bhattacharya
28
6
0
12 Jul 2023
Reduced Precision Floating-Point Optimization for Deep Neural Network On-Device Learning on MicroControllers
D. Nadalini
Manuele Rusci
Luca Benini
Francesco Conti
31
15
0
30 May 2023
Contrastive Tuning: A Little Help to Make Masked Autoencoders Forget
Johannes Lehner
Benedikt Alkin
Andreas Fürst
Elisabeth Rumetshofer
Lukas Miklautz
Sepp Hochreiter
36
18
0
20 Apr 2023
Unit Scaling: Out-of-the-Box Low-Precision Training
Charlie Blake
Douglas Orr
Carlo Luschi
MQ
24
7
0
20 Mar 2023
With Shared Microexponents, A Little Shifting Goes a Long Way
Bita Darvish Rouhani
Ritchie Zhao
V. Elango
Rasoul Shafipour
Mathew Hall
...
Eric S. Chung
Zhaoxia Deng
S. Naghshineh
Jongsoo Park
Maxim Naumov
MQ
43
38
0
16 Feb 2023
The Hidden Power of Pure 16-bit Floating-Point Neural Networks
Juyoung Yun
Byungkon Kang
Zhoulai Fu
MQ
26
1
0
30 Jan 2023
RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs
A. M. Ribeiro-dos-Santos
João Dinis Ferreira
O. Mutlu
G. Falcão
MQ
21
1
0
15 Jan 2023
Numerical Stability of DeepGOPlus Inference
Inés Gonzalez Pepe
Yohan Chatelain
Gregory Kiar
Tristan Glatard
BDL
24
2
0
13 Dec 2022
BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
BigScience Workshop:
Teven Le Scao
Angela Fan
Christopher Akiki
...
Zhongli Xie
Zifan Ye
M. Bras
Younes Belkada
Thomas Wolf
VLM
121
2,319
0
09 Nov 2022
Defending with Errors: Approximate Computing for Robustness of Deep Neural Networks
Amira Guesmi
Ihsen Alouani
Khaled N. Khasawneh
M. Baklouti
T. Frikha
Mohamed Abid
Nael B. Abu-Ghazaleh
AAML
OOD
30
2
0
02 Nov 2022
Multi-lingual Evaluation of Code Generation Models
Ben Athiwaratkun
Sanjay Krishna Gouda
Zijian Wang
Xiaopeng Li
Yuchen Tian
...
Baishakhi Ray
Parminder Bhatia
Sudipta Sengupta
Dan Roth
Bing Xiang
ELM
120
161
0
26 Oct 2022
Precision Machine Learning
Eric J. Michaud
Ziming Liu
Max Tegmark
24
34
0
24 Oct 2022
OLLA: Optimizing the Lifetime and Location of Arrays to Reduce the Memory Usage of Neural Networks
Benoit Steiner
Mostafa Elhoushi
Jacob Kahn
James Hegarty
31
8
0
24 Oct 2022
FP8 Formats for Deep Learning
Paulius Micikevicius
Dusan Stosic
N. Burgess
Marius Cornea
Pradeep Dubey
...
Naveen Mellempudi
S. Oberman
M. Shoeybi
Michael Siu
Hao Wu
BDL
VLM
MQ
77
126
0
12 Sep 2022
Efficient Methods for Natural Language Processing: A Survey
Marcos Vinícius Treviso
Ji-Ung Lee
Tianchu Ji
Betty van Aken
Qingqing Cao
...
Emma Strubell
Niranjan Balasubramanian
Leon Derczynski
Iryna Gurevych
Roy Schwartz
33
109
0
31 Aug 2022
Productivity meets Performance: Julia on A64FX
Mosè Giordano
Milan Klöwer
Valentin Churavy
24
11
0
26 Jul 2022
8-bit Numerical Formats for Deep Neural Networks
Badreddine Noune
Philip Jones
Daniel Justus
Dominic Masters
Carlo Luschi
MQ
23
34
0
06 Jun 2022
KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation
Ta-Chung Chi
Ting-Han Fan
Peter J. Ramadge
Alexander I. Rudnicky
49
65
0
20 May 2022
FastFold: Reducing AlphaFold Training Time from 11 Days to 67 Hours
Shenggan Cheng
Xuanlei Zhao
Guangyang Lu
Bin-Rui Li
Zhongming Yu
Tian Zheng
R. Wu
Xiwen Zhang
Jian Peng
Yang You
AI4CE
27
30
0
02 Mar 2022
Vau da muntanialas: Energy-efficient multi-die scalable acceleration of RNN inference
G. Paulin
Francesco Conti
Lukas Cavigelli
Luca Benini
24
8
0
14 Feb 2022
Energy awareness in low precision neural networks
Nurit Spingarn-Eliezer
Ron Banner
Elad Hoffer
Hilla Ben-Yaacov
T. Michaeli
41
0
0
06 Feb 2022
EcoFlow: Efficient Convolutional Dataflows for Low-Power Neural Network Accelerators
Lois Orosa
Skanda Koppula
Yaman Umuroglu
Konstantinos Kanellopoulos
Juan Gómez Luna
Michaela Blott
K. Vissers
O. Mutlu
46
4
0
04 Feb 2022
Whole Brain Segmentation with Full Volume Neural Network
Yeshu Li
Jianwei Cui
Yilun Sheng
Xiao Liang
Jingdong Wang
E. Chang
Yan Xu
32
11
0
29 Oct 2021
A TinyML Platform for On-Device Continual Learning with Quantized Latent Replays
Leonardo Ravaglia
Manuele Rusci
D. Nadalini
Alessandro Capotondi
Francesco Conti
Luca Benini
BDL
41
64
0
20 Oct 2021
Reducing numerical precision preserves classification accuracy in Mondrian Forests
Marc Vicuna
Martin Khannouz
Gregory Kiar
Yohan Chatelain
Tristan Glatard
MQ
19
3
0
28 Jun 2021
Mixed-Precision Embedding Using a Cache
J. Yang
Jianyu Huang
Jongsoo Park
P. T. P. Tang
Andrew Tulloch
27
36
0
21 Oct 2020
Hardware Acceleration of Sparse and Irregular Tensor Computations of ML Models: A Survey and Insights
Shail Dave
Riyadh Baghdadi
Tony Nowatzki
Sasikanth Avancha
Aviral Shrivastava
Baoxin Li
64
82
0
02 Jul 2020
Defensive Approximation: Securing CNNs using Approximate Computing
Amira Guesmi
Ihsen Alouani
Khaled N. Khasawneh
M. Baklouti
T. Frikha
Mohamed Abid
Nael B. Abu-Ghazaleh
AAML
19
37
0
13 Jun 2020
An Overview of Neural Network Compression
James O'Neill
AI4CE
45
98
0
05 Jun 2020
Optimizing Deep Learning Recommender Systems' Training On CPU Cluster Architectures
Dhiraj D. Kalamkar
E. Georganas
Sudarshan Srinivasan
Jianping Chen
Mikhail Shiryaev
A. Heinecke
56
48
0
10 May 2020
Reducing Data Motion to Accelerate the Training of Deep Neural Networks
Sicong Zhuang
Cristiano Malossi
Marc Casas
24
0
0
05 Apr 2020
Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks
Léopold Cambier
Anahita Bhiwandiwalla
Ting Gong
M. Nekuii
Oguz H. Elibol
Hanlin Tang
MQ
23
48
0
16 Jan 2020