Microscaling Data Formats for Deep Learning

16 October 2023
B. Rouhani
Ritchie Zhao
Ankit More
Mathew Hall
Alireza Khodamoradi
Summer Deng
Dhruv Choudhary
Marius Cornea
Eric Dellinger
K. Denolf
Dusan Stosic
V. Elango
Maximilian Golub
Alexander Heinecke
Phil James-Roxby
Dharmesh Jani
Gaurav Kolhe
M. Langhammer
Ada Li
Levi Melnick
Maral Mesmakhosroshahi
Andres Rodriguez
Michael Schulte
Rasoul Shafipour
Lei Shao
Michael Siu
Pradeep Dubey
Paulius Micikevicius
Maxim Naumov
Colin Verrilli
Ralph Wittig
Doug Burger
Eric S. Chung
Abstract

Narrow bit-width data formats are key to reducing the computational and storage costs of modern deep learning applications. This paper evaluates Microscaling (MX) data formats that combine a per-block scaling factor with narrow floating-point and integer types for individual elements. MX formats balance the competing needs of hardware efficiency, model accuracy, and user friction. Empirical results on over two dozen benchmarks demonstrate the practicality of MX data formats as a drop-in replacement for baseline FP32 for AI inference and training with low user friction. We also show the first instance of training generative language models at sub-8-bit weights, activations, and gradients with minimal accuracy loss and no modifications to the training recipe.
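
The abstract describes the core of MX: a shared per-block scaling factor paired with narrow per-element types. The sketch below illustrates that idea with a power-of-two scale shared across blocks of 32 elements and signed INT8 element values. The block size, scale choice, and element format here are illustrative assumptions for clarity, not the OCP MX specification or the paper's exact method.

```python
# Minimal sketch of block-scaled ("microscaling"-style) quantization.
# Assumptions (not the OCP MX spec): block size 32, signed INT8 elements,
# and a power-of-two shared scale per block.
import numpy as np

BLOCK = 32       # assumed block size
QMAX = 127       # max magnitude representable in signed INT8

def quantize_block_scaled(x: np.ndarray):
    """Quantize a 1-D float32 array with one power-of-two scale per block."""
    pad = (-len(x)) % BLOCK
    blocks = np.pad(x, (0, pad)).reshape(-1, BLOCK)
    # Shared per-block scale: smallest power of two such that the
    # largest magnitude in the block fits the element type's range.
    max_abs = np.abs(blocks).max(axis=1, keepdims=True)
    max_abs = np.where(max_abs == 0, 1.0, max_abs)  # avoid log2(0)
    scale = 2.0 ** np.ceil(np.log2(max_abs / QMAX))
    q = np.clip(np.round(blocks / scale), -QMAX, QMAX).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray, n: int) -> np.ndarray:
    """Reconstruct approximate float32 values from quantized blocks."""
    return (q.astype(np.float32) * scale).reshape(-1)[:n]

x = np.random.randn(100).astype(np.float32)
q, s = quantize_block_scaled(x)
print("max abs error:", np.abs(x - dequantize(q, s, len(x))).max())
```

Because each block of 32 elements stores only one scale, the per-element storage overhead of the scale is small, while the block-local dynamic range lets narrow element types track locally varying magnitudes; this is the hardware-efficiency/accuracy trade-off the abstract refers to.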

arXiv: 2310.10537