Leveraging Compute-in-Memory for Efficient Generative Model Inference in TPUs
With the rapid rise of generative models, efficiently deploying them on specialized hardware has become critical. Tensor Processing Units (TPUs) are designed to accelerate AI workloads, but their high power consumption necessitates innovations to improve efficiency. Compute-in-memory (CIM) has emerged as a promising paradigm with superior area and energy efficiency. In this work, we present a TPU architecture that integrates digital CIM to replace the conventional digital systolic arrays in matrix multiply units (MXUs). We first establish a CIM-based TPU architecture model and simulator to evaluate the benefits of CIM for the inference of diverse generative models. Building on the resulting design insights, we then explore a range of CIM-based TPU architectural design choices. Compared to the baseline TPUv4i architecture, these choices achieve up to 44.2% and 33.8% performance improvements for large language model and diffusion transformer inference, respectively, along with a 27.3x reduction in MXU energy consumption.
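To make the modeling approach concrete, the sketch below gives a minimal first-order analytical model of an MXU executing a single GEMM, in the spirit of the architecture simulator described above. All names and numbers here (the MXUConfig and gemm_cost helpers, array dimensions, clock frequencies, per-MAC energies) are illustrative assumptions for this sketch, not values or code from the paper.

    # Hypothetical sketch: first-order latency/energy model of an MXU running one
    # GEMM, comparing a systolic-array baseline against a digital CIM design.
    # All parameters are assumed for illustration, not taken from the paper.
    import math
    from dataclasses import dataclass

    @dataclass
    class MXUConfig:
        name: str
        rows: int                 # PE / CIM-macro rows
        cols: int                 # PE / CIM-macro columns
        freq_ghz: float           # operating clock frequency (GHz)
        energy_per_mac_pj: float  # average energy of one multiply-accumulate (pJ)

    def gemm_cost(cfg: MXUConfig, m: int, k: int, n: int):
        """Estimate latency (us) and energy (uJ) of an MxK @ KxN GEMM,
        assuming the problem is tiled over the array with full utilization
        inside each tile (a deliberate simplification)."""
        tiles = math.ceil(m / cfg.rows) * math.ceil(n / cfg.cols)
        cycles = tiles * k                       # one K-deep accumulation pass per tile
        latency_us = cycles / (cfg.freq_ghz * 1e3)
        energy_uj = m * k * n * cfg.energy_per_mac_pj * 1e-6
        return latency_us, energy_uj

    # Illustrative configurations (assumed numbers, not measured TPUv4i/CIM values).
    systolic = MXUConfig("systolic 128x128", 128, 128, freq_ghz=1.05, energy_per_mac_pj=1.0)
    cim      = MXUConfig("digital CIM 128x128", 128, 128, freq_ghz=0.70, energy_per_mac_pj=0.05)

    for cfg in (systolic, cim):
        lat, en = gemm_cost(cfg, m=1024, k=4096, n=4096)  # e.g., an LLM FFN-shaped GEMM
        print(f"{cfg.name}: {lat:.1f} us, {en:.1f} uJ")

A real simulator would additionally model memory traffic, utilization loss at tile edges, and per-workload layer shapes; this stripped-down version only illustrates how MXU-level design parameters feed into end-to-end latency and energy estimates.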
@article{zhu2025_2503.00461,
  title   = {Leveraging Compute-in-Memory for Efficient Generative Model Inference in TPUs},
  author  = {Zhantong Zhu and Hongou Li and Wenjie Ren and Meng Wu and Le Ye and Ru Huang and Tianyu Jia},
  journal = {arXiv preprint arXiv:2503.00461},
  year    = {2025}
}