
Benchmarking Ultra-Low-Power μNPUs

Main: 11 pages
Figures: 7
Tables: 6
Bibliography: 3 pages
Appendix: 1 page
Abstract

Efficient on-device neural network (NN) inference offers predictable latency, improved privacy and reliability, and lower operating costs for vendors than cloud-based inference. This has sparked recent development of microcontroller-scale NN accelerators, also known as neural processing units (μNPUs), designed specifically for ultra-low-power applications. We present the first comparative evaluation of a number of commercially-available μNPUs, including the first independent benchmarks for multiple platforms. To ensure fairness, we develop and open-source a model compilation pipeline supporting consistent benchmarking of quantized models across diverse microcontroller hardware. Our resulting analysis uncovers both expected performance trends as well as surprising disparities between hardware specifications and actual performance, including certain μNPUs exhibiting unexpected scaling behaviors with model complexity. This work provides a foundation for ongoing evaluation of μNPU platforms, alongside offering practical insights for both hardware and software developers in this rapidly evolving space.
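The quantization step central to such a pipeline can be illustrated with a minimal sketch. This is not the paper's open-source pipeline; it shows only the generic per-tensor symmetric int8 scheme that microcontroller inference runtimes commonly expect, with illustrative helper names.

```python
def quantize_int8(weights):
    """Map float weights to int8 using a per-tensor symmetric scale.

    This is a generic sketch of post-training quantization, not the
    benchmarking pipeline described in the abstract.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    # Round to the nearest integer and clamp to the int8 range.
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights, e.g. for accuracy checks."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.0, 1.27]
q, scale = quantize_int8(weights)
# q == [50, -127, 0, 127]; scale == 0.01
```

Running the same quantized model across devices (rather than re-quantizing per platform) is what makes cross-hardware latency comparisons meaningful, since accuracy and arithmetic are then held constant.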
