arXiv:2505.11594
SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training

16 May 2025
Jintao Zhang
Jia Wei
Pengle Zhang
Xiaoming Xu
Haofeng Huang
Haoxu Wang
Kai Jiang
Jun Zhu
Jianfei Chen
Main: 10 pages · Appendix: 6 pages · Bibliography: 4 pages · 23 figures · 12 tables
Abstract

The efficiency of attention is important due to its quadratic time complexity. We enhance the efficiency of attention through two key contributions. First, we leverage the new FP4 Tensor Cores in Blackwell GPUs to accelerate attention computation. Our implementation achieves 1038 TOPS on RTX5090, a 5x speedup over the fastest FlashAttention on the same GPU. Experiments show that our FP4 attention can accelerate inference of various models in a plug-and-play way. Second, we pioneer low-bit attention for training tasks. Existing low-bit attention works, such as FlashAttention3 and SageAttention, focus only on inference. However, the efficiency of training large models is also important. To explore whether low-bit attention can be effectively applied to training, we design an accurate and efficient 8-bit attention for both forward and backward propagation. Experiments indicate that 8-bit attention achieves lossless performance in fine-tuning tasks but exhibits slower convergence in pretraining tasks. The code will be available at this https URL.
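To make the "microscaling FP4" idea in the title concrete, below is a minimal sketch, not taken from the paper's code, of how block-wise FP4 quantization might work in principle: each contiguous block of 16 elements shares one scale factor, and each element is rounded to the FP4 E2M1 value grid. The paper's actual kernels run this on Blackwell FP4 Tensor Cores in CUDA and store scales in a low-precision format; the function names here (`quantize_mxfp4`, `dequantize_mxfp4`) and the float-valued scales are illustrative assumptions for readability.

```python
# Illustrative sketch of microscaling FP4 quantization (not the authors' implementation).
# Assumptions: FP4 E2M1 value grid, one shared scale per block of 16 elements,
# scales kept in float32 here (real microscaling formats use E8M0 or FP8 scales).
import torch

# Representable magnitudes of FP4 E2M1 (sign handled separately).
FP4_E2M1_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_mxfp4(x: torch.Tensor, block: int = 16):
    """Quantize the last dimension of `x` in blocks of `block` elements.

    Returns FP4-coded values (stored as float for clarity) and one scale per
    block, so that x ~= codes * scales block-wise.
    """
    *lead, n = x.shape
    assert n % block == 0, "last dim must be a multiple of the block size"
    xb = x.reshape(*lead, n // block, block)

    # One scale per block: map the block's max magnitude onto FP4's max (6.0).
    amax = xb.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12)
    scales = amax / 6.0

    # Round each scaled element to the nearest representable FP4 magnitude.
    scaled = xb / scales
    sign = scaled.sign()
    dist = (scaled.abs().unsqueeze(-1) - FP4_E2M1_GRID).abs()
    codes = sign * FP4_E2M1_GRID[dist.argmin(dim=-1)]
    return codes.reshape(*lead, n), scales

def dequantize_mxfp4(codes: torch.Tensor, scales: torch.Tensor, block: int = 16):
    """Reconstruct an approximation of the original tensor from codes and scales."""
    *lead, n = codes.shape
    cb = codes.reshape(*lead, n // block, block)
    return (cb * scales).reshape(*lead, n)

# Example: quantize a block of attention probabilities and inspect the error.
p = torch.softmax(torch.randn(4, 64), dim=-1)
codes, scales = quantize_mxfp4(p)
p_hat = dequantize_mxfp4(codes, scales)
print("max abs error:", (p - p_hat).abs().max().item())
```

The key property this sketch illustrates is that the fine-grained (per-16-element) scales limit quantization error to the dynamic range within each small block, which is what lets low-bit formats like FP4 remain usable on attention matrices; how the paper handles the remaining accuracy and hardware details is described in the full text.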
