Efficient Precision-Scalable Hardware for Microscaling (MX) Processing in Robotics Learning

Main: 6 pages, 9 figures; bibliography: 1 page
Abstract

Autonomous robots require efficient on-device learning to adapt to new environments without cloud dependency. For such edge training, Microscaling (MX) data types offer a promising solution by combining integer and floating-point representations with shared exponents, reducing energy consumption while maintaining accuracy. However, the state-of-the-art continuous learning processor, Dacapo, faces two limitations: it supports only MXINT, and it uses inefficient vector-based grouping during backpropagation. In this paper, we present, to the best of our knowledge, the first work that addresses these limitations, with two key innovations: (1) a precision-scalable arithmetic unit that supports all six MX data types by exploiting sub-word parallelism and unified integer and floating-point processing; and (2) support for square shared-exponent groups, which enables efficient weight handling during backpropagation by removing storage redundancy and quantization overhead. We evaluate our design against Dacapo under iso-peak-throughput conditions on four robotics workloads, in TSMC 16nm FinFET technology at 500MHz, achieving a 25.6% area reduction, a 51% lower memory footprint, and 4x higher effective training throughput at comparable energy efficiency, enabling efficient robotics continual learning at the edge.
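The core MX idea referenced above, quantizing a block of values with one shared power-of-two exponent plus low-bitwidth per-element mantissas, can be illustrated with a short sketch. This is a simplified MXINT8-style example for intuition only; the function names are hypothetical and the details (block size, rounding, exponent encoding) do not follow the OCP MX specification or the paper's hardware exactly.

```python
import numpy as np

def mxint8_quantize(block):
    """Quantize a block of floats to an MXINT8-style format:
    one shared power-of-two scale per block plus int8 mantissas.
    Simplified illustration of Microscaling (MX); not the exact OCP MX spec.
    """
    block = np.asarray(block, dtype=np.float64)
    amax = np.max(np.abs(block))
    if amax == 0.0:
        return np.zeros(block.shape, dtype=np.int8), 0
    # Choose the shared exponent so the largest magnitude maps near
    # the top of the int8 range (|mantissa| <= 127 ~ 2^7).
    shared_exp = int(np.floor(np.log2(amax))) - 6
    scale = 2.0 ** shared_exp
    mantissas = np.clip(np.round(block / scale), -127, 127).astype(np.int8)
    return mantissas, shared_exp

def mxint8_dequantize(mantissas, shared_exp):
    """Reconstruct approximate float values from the shared-exponent block."""
    return mantissas.astype(np.float64) * (2.0 ** shared_exp)
```

Because the scale is a power of two, dequantization is a cheap shift in hardware; values that are exact binary fractions round-trip losslessly, e.g. `mxint8_quantize([0.5, -1.0, 0.25])` yields mantissas `[32, -64, 16]` with shared exponent `-6`.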

@article{cuyckens2025_2505.22404,
  title={Efficient Precision-Scalable Hardware for Microscaling (MX) Processing in Robotics Learning},
  author={Stef Cuyckens and Xiaoling Yi and Nitish Satya Murthy and Chao Fang and Marian Verhelst},
  journal={arXiv preprint arXiv:2505.22404},
  year={2025}
}