ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

FP8 Quantization: The Power of the Exponent

19 August 2022
Andrey Kuzmin
M. V. Baalen
Yuwei Ren
Markus Nagel
Jorn W. T. Peters
Tijmen Blankevoort
    MQ

Papers citing "FP8 Quantization: The Power of the Exponent"

50 / 51 papers shown
Title
From Large to Super-Tiny: End-to-End Optimization for Cost-Efficient LLMs
Jiliang Ni
Jiachen Pu
Zhongyi Yang
Kun Zhou
Hui Wang
Xiaoliang Xiao
Dakui Wang
Xin Li
Jingfeng Luo
Conggang Hu
32
0
0
18 Apr 2025
Prada: Black-Box LLM Adaptation with Private Data on Resource-Constrained Devices
Z. Wang
Yexiao He
Zheyu Shen
Yu Li
Guoheng Sun
Myungjin Lee
Ang Li
48
0
0
19 Mar 2025
FP4DiT: Towards Effective Floating Point Quantization for Diffusion Transformers
Ruichen Chen
Keith G. Mills
Di Niu
MQ
52
0
0
19 Mar 2025
Wanda++: Pruning Large Language Models via Regional Gradients
Yifan Yang
Kai Zhen
Bhavana Ganesh
Aram Galstyan
Goeric Huybrechts
...
S. Bodapati
Nathan Susanj
Zheng Zhang
Jack FitzGerald
Abhishek Kumar
59
0
0
06 Mar 2025
GREEN-CODE: Learning to Optimize Energy Efficiency in LLM-based Code Generation
Shashikant Ilager
Lukas Florian Briem
Ivona Brandić
34
0
0
19 Jan 2025
Scaling Laws for Floating Point Quantization Training
X. Sun
Shuaipeng Li
Ruobing Xie
Weidong Han
Kan Wu
...
Yangyu Tao
Zhanhui Kang
C. Xu
Di Wang
Jie Jiang
MQ
AIFin
60
0
0
05 Jan 2025
TCAQ-DM: Timestep-Channel Adaptive Quantization for Diffusion Models
Haocheng Huang
Jiaxin Chen
Jinyang Guo
Ruiyi Zhan
Yunhong Wang
DiffM
MQ
75
1
0
21 Dec 2024
Post-Training Non-Uniform Quantization for Convolutional Neural Networks
Ahmed Luqman
Khuzemah Qazi
Imdadullah Khan
MQ
67
0
0
10 Dec 2024
LLM-Inference-Bench: Inference Benchmarking of Large Language Models on AI Accelerators
Krishna Teja Chitty-Venkata
Siddhisanket Raskar
B. Kale
Farah Ferdaus
Aditya Tanikanti
Ken Raffenetti
Valerie Taylor
M. Emani
V. Vishwanath
39
7
0
31 Oct 2024
Demystifying Large Language Models for Medicine: A Primer
Qiao Jin
Nicholas Wan
Robert Leaman
Shubo Tian
Zhizheng Wang
...
Chunhua Weng
Ronald M. Summers
Qingyu Chen
Yifan Peng
Zhiyong Lu
LM&MA
40
3
0
24 Oct 2024
DAQ: Density-Aware Post-Training Weight-Only Quantization For LLMs
Yingsong Luo
Ling Chen
MQ
21
0
0
16 Oct 2024
Error Diffusion: Post Training Quantization with Block-Scaled Number Formats for Neural Networks
Alireza Khodamoradi
K. Denolf
Eric Dellinger
MQ
29
0
0
15 Oct 2024
SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation
Aurick Qiao
Z. Yao
Samyam Rajbhandari
Yuxiong He
VLM
32
0
0
04 Oct 2024
Floating-floating point: a highly accurate number representation with flexible Counting ranges
Itamar Cohen
Gil Einziger
21
0
0
22 Sep 2024
Towards Narrowing the Generalization Gap in Deep Boolean Networks
Youngsung Kim
NAI
AI4CE
28
0
0
06 Sep 2024
Inference Optimizations for Large Language Models: Effects, Challenges, and Practical Considerations
Leo Donisch
Sigurd Schacht
Carsten Lanquillon
22
2
0
06 Aug 2024
Scalify: scale propagation for efficient low-precision LLM training
Paul Balança
Sam Hosegood
Carlo Luschi
Andrew Fitzgibbon
24
2
0
24 Jul 2024
MCU-MixQ: A HW/SW Co-optimized Mixed-precision Neural Network Design Framework for MCUs
Junfeng Gong
Cheng Liu
Long Cheng
Huawei Li
Xiaowei Li
28
0
0
17 Jul 2024
Towards Federated Learning with On-device Training and Communication in 8-bit Floating Point
Bokun Wang
Axel Berg
D. A. E. Acar
Chuteng Zhou
FedML
MQ
39
0
0
02 Jul 2024
Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs
Jordan Dotzel
Yuzong Chen
Bahaa Kotb
Sushma Prasad
Gang Wu
Sheng R. Li
Mohamed S. Abdelfattah
Zhiru Zhang
26
8
0
06 May 2024
BASS: Batched Attention-optimized Speculative Sampling
Haifeng Qian
Sujan Kumar Gonugondla
Sungsoo Ha
Mingyue Shang
Sanjay Krishna Gouda
Ramesh Nallapati
Sudipta Sengupta
Xiaofei Ma
Anoop Deoras
BDL
50
8
0
24 Apr 2024
Bifurcated Attention: Accelerating Massively Parallel Decoding with Shared Prefixes in LLMs
Ben Athiwaratkun
Sujan Kumar Gonugondla
Sanjay Krishna Gouda
Haifeng Qian
Hantian Ding
...
Liangfu Chen
Parminder Bhatia
Ramesh Nallapati
Sudipta Sengupta
Bing Xiang
51
4
0
13 Mar 2024
BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation
Dayou Du
Yijia Zhang
Shijie Cao
Jiaqi Guo
Ting Cao
Xiaowen Chu
Ningyi Xu
MQ
44
29
0
16 Feb 2024
A Comprehensive Survey of Compression Algorithms for Language Models
Seungcheol Park
Jaehyeon Choi
Sojin Lee
U. Kang
MQ
24
12
0
27 Jan 2024
Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators
Yaniv Blumenfeld
Itay Hubara
Daniel Soudry
37
3
0
25 Jan 2024
ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks
Xiaoxia Wu
Haojun Xia
Stephen Youn
Zhen Zheng
Shiyang Chen
...
Reza Yazdani Aminabadi
Yuxiong He
Olatunji Ruwase
Leon Song
Zhewei Yao
66
8
0
14 Dec 2023
FP8-BERT: Post-Training Quantization for Transformer
Jianwei Li
Tianchi Zhang
Ian En-Hsu Yen
Dongkuan Xu
MQ
10
5
0
10 Dec 2023
Shedding the Bits: Pushing the Boundaries of Quantization with Minifloats on FPGAs
Shivam Aggarwal
Hans Jakob Damsgaard
Alessandro Pappalardo
Giuseppe Franco
Thomas B. Preußer
Michaela Blott
Tulika Mitra
MQ
19
5
0
21 Nov 2023
Advancing Transformer Architecture in Long-Context Large Language Models: A Comprehensive Survey
Yunpeng Huang
Jingwei Xu
Junyu Lai
Zixu Jiang
Taolue Chen
...
Xiaoxing Ma
Lijuan Yang
Zhou Xin
Shupeng Li
Penghao Zhao
LLMAG
KELM
31
54
0
21 Nov 2023
Low-Precision Floating-Point for Efficient On-Board Deep Neural Network Processing
Cédric Gernigon
Silviu-Ioan Filip
Olivier Sentieys
Clément Coggiola
Mickael Bruno
MQ
11
7
0
18 Nov 2023
Efficient LLM Inference on CPUs
Haihao Shen
Hanwen Chang
Bo Dong
Yu Luo
Hengyu Meng
MQ
15
17
0
01 Nov 2023
LLM-FP4: 4-Bit Floating-Point Quantized Transformers
Shih-yang Liu
Zechun Liu
Xijie Huang
Pingcheng Dong
Kwang-Ting Cheng
MQ
19
56
0
25 Oct 2023
Training and inference of large language models using 8-bit floating point
Sergio P. Perez
Yan Zhang
James Briggs
Charlie Blake
P. Krishnamurthy
Paul Balanca
Carlo Luschi
Stephen Barlow
Andrew William Fitzgibbon
MQ
24
18
0
29 Sep 2023
Efficient Post-training Quantization with FP8 Formats
Haihao Shen
Naveen Mellempudi
Xin He
Q. Gao
Chang‐Bao Wang
Mengni Wang
MQ
23
19
0
26 Sep 2023
FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search
Jordan Dotzel
Gang Wu
Andrew Li
M. Umar
Yun Ni
...
Liqun Cheng
Martin G. Dixon
N. Jouppi
Quoc V. Le
Sheng R. Li
MQ
25
3
0
07 Aug 2023
ZeroQuant-FP: A Leap Forward in LLMs Post-Training W4A8 Quantization Using Floating-Point Formats
Xiaoxia Wu
Z. Yao
Yuxiong He
MQ
27
43
0
19 Jul 2023
A Survey of Techniques for Optimizing Transformer Inference
Krishna Teja Chitty-Venkata
Sparsh Mittal
M. Emani
V. Vishwanath
Arun Somani
35
62
0
16 Jul 2023
INT-FP-QSim: Mixed Precision and Formats For Large Language Models and Vision Transformers
Lakshmi Nair
Mikhail Bernadskiy
Arulselvan Madhavan
Craig Chan
Ayon Basumallik
D. Bunandar
MQ
28
2
0
07 Jul 2023
Pruning vs Quantization: Which is Better?
Andrey Kuzmin
Markus Nagel
M. V. Baalen
Arash Behboodi
Tijmen Blankevoort
MQ
19
48
0
06 Jul 2023
Intriguing Properties of Quantization at Scale
Arash Ahmadian
Saurabh Dash
Hongyu Chen
Bharat Venkitesh
Stephen Gou
Phil Blunsom
A. Ustun
Sara Hooker
MQ
46
38
0
30 May 2023
Integer or Floating Point? New Outlooks for Low-Bit Quantization on Large Language Models
Yijia Zhang
Lingran Zhao
Shijie Cao
Wenqiang Wang
Ting Cao
Fan Yang
Mao Yang
Shanghang Zhang
Ningyi Xu
MQ
25
17
0
21 May 2023
Outlier Suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling
Xiuying Wei
Yunchen Zhang
Yuhang Li
Xiangguo Zhang
Ruihao Gong
Jian Ren
Zhengang Li
MQ
19
31
0
18 Apr 2023
FP8 versus INT8 for efficient deep learning inference
M. V. Baalen
Andrey Kuzmin
Suparna S. Nair
Yuwei Ren
E. Mahurin
...
Sundar Subramanian
Sanghyuk Lee
Markus Nagel
Joseph B. Soriaga
Tijmen Blankevoort
MQ
23
44
0
31 Mar 2023
Unit Scaling: Out-of-the-Box Low-Precision Training
Charlie Blake
Douglas Orr
Carlo Luschi
MQ
22
7
0
20 Mar 2023
Rediscovering Hashed Random Projections for Efficient Quantization of Contextualized Sentence Embeddings
Ulf A. Hamster
Ji-Ung Lee
Alexander Geyken
Iryna Gurevych
16
0
0
13 Mar 2023
Full Stack Optimization of Transformer Inference: a Survey
Sehoon Kim
Coleman Hooper
Thanakul Wattanawong
Minwoo Kang
Ruohan Yan
...
Qijing Huang
Kurt Keutzer
Michael W. Mahoney
Y. Shao
A. Gholami
MQ
28
100
0
27 Feb 2023
FP8 Formats for Deep Learning
Paulius Micikevicius
Dusan Stosic
N. Burgess
Marius Cornea
Pradeep Dubey
...
Naveen Mellempudi
S. Oberman
M. Shoeybi
Michael Siu
Hao Wu
BDL
VLM
MQ
67
121
0
12 Sep 2022
Efficient Methods for Natural Language Processing: A Survey
Marcos Vinícius Treviso
Ji-Ung Lee
Tianchu Ji
Betty van Aken
Qingqing Cao
...
Emma Strubell
Niranjan Balasubramanian
Leon Derczynski
Iryna Gurevych
Roy Schwartz
28
109
0
31 Aug 2022
Overcoming Oscillations in Quantization-Aware Training
Markus Nagel
Marios Fournarakis
Yelysei Bondarenko
Tijmen Blankevoort
MQ
108
100
0
21 Mar 2022
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
ELM
297
6,950
0
20 Apr 2018