Accurate INT8 Training Through Dynamic Block-Level Fallback

13 March 2025
Pengle Zhang
Jia Wei
Jintao Zhang
Jun Zhu
Jianfei Chen
Abstract

Transformer models have achieved remarkable success across various AI applications but face significant training costs. Low-bit training, such as INT8 training, can leverage computational units with higher throughput and has already demonstrated its effectiveness on GPT-2 models with block-level quantization. However, it struggles with modern Transformer variants that incorporate GLU units, because these variants exhibit complex distributions of activation outliers. To address this challenge, we propose Fallback Quantization, a mixed-precision GEMM that dynamically falls back from 8-bit to 16-bit for activation blocks containing outliers. Experiments show that our approach is robust in both fine-tuning and pretraining settings. Moreover, our method achieves a 1.57x end-to-end training speedup on RTX 4090 GPUs.
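The sketch below illustrates the general idea of block-level quantization with dynamic fallback, not the authors' implementation: activations are split into blocks, blocks whose absolute maximum exceeds a threshold are kept in 16-bit, and the remaining blocks are quantized to INT8 with per-block scales before the GEMM. The block size (128), the outlier threshold (6.0), and the helper names quantize_block_int8 and fallback_quantize are illustrative assumptions; INT8 arithmetic is emulated in PyTorch by quantize-dequantize rather than by the fused INT8 kernels the paper targets.

# Minimal sketch of dynamic block-level fallback (assumptions noted above).
import torch

BLOCK = 128           # assumed block size along the reduction dimension
OUTLIER_THRESH = 6.0  # assumed absmax threshold for falling back to 16-bit

def quantize_block_int8(x):
    """Symmetric per-block INT8 quantization; returns dequantized values
    to emulate the precision loss of an INT8 GEMM."""
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127)
    return q * scale

def fallback_quantize(x):
    """Quantize activation blocks to INT8, keeping 16-bit precision for
    blocks whose absolute maximum exceeds the outlier threshold."""
    out = torch.empty_like(x)
    n_fallback = 0
    for start in range(0, x.shape[-1], BLOCK):
        blk = x[..., start:start + BLOCK]
        if blk.abs().amax() > OUTLIER_THRESH:
            out[..., start:start + BLOCK] = blk  # outlier block stays 16-bit
            n_fallback += 1
        else:
            out[..., start:start + BLOCK] = quantize_block_int8(blk)
    return out, n_fallback

# Usage: emulate a mixed-precision GEMM y = x @ w with a synthetic outlier.
x = torch.randn(4, 1024, dtype=torch.bfloat16)
x[0, 5] = 40.0                                  # injected activation outlier
w = torch.randn(1024, 512, dtype=torch.bfloat16)
x_q, n_fb = fallback_quantize(x)
y = x_q @ w                                     # blocks mix INT8/16-bit precision
print(f"{n_fb} of {x.shape[-1] // BLOCK} blocks fell back to 16-bit")

In a real kernel the INT8 and 16-bit blocks would feed separate tensor-core GEMMs that are accumulated together, which is what makes the fallback cheap compared with running the whole layer in 16-bit.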

View on arXiv
@article{zhang2025_2503.08040,
  title={Accurate INT8 Training Through Dynamic Block-Level Fallback},
  author={Pengle Zhang and Jia Wei and Jintao Zhang and Jun Zhu and Jianfei Chen},
  journal={arXiv preprint arXiv:2503.08040},
  year={2025}
}