
SLO-aware GPU Frequency Scaling for Energy Efficient LLM Inference Serving

International Symposium on High-Performance Computer Architecture (HPCA), 2024
Main: 12 pages · 16 figures · 3 tables · Bibliography: 3 pages
Abstract

As Large Language Models (LLMs) gain traction, their reliance on power-hungry GPUs imposes ever-increasing energy demands, raising environmental and monetary concerns. Inference dominates LLM workloads, presenting a critical challenge for providers: minimizing energy costs under Service-Level Objectives (SLOs) that ensure optimal user experience. In this paper, we present \textit{throttLLéM}, a framework that reduces energy consumption while meeting SLOs through the use of instance and GPU frequency scaling. \textit{throttLLéM} features mechanisms that project future KV cache usage and batch size. Leveraging a Machine-Learning (ML) model that receives these projections as inputs, \textit{throttLLéM} manages performance at the iteration level to satisfy SLOs with reduced frequencies and instance sizes. We show that the proposed ML model achieves $R^2$ scores greater than 0.97 and mispredicts performance by less than 1 iteration per second on average. Experimental results on LLM inference traces show that \textit{throttLLéM} achieves up to 43.8\% lower energy consumption and an energy efficiency improvement of at least $1.71\times$ under SLOs, when compared to NVIDIA's Triton server.
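The control loop the abstract describes can be sketched as follows: project the next batch size and KV cache footprint, predict iteration throughput at each candidate GPU frequency, and select the lowest frequency that still meets the iteration-level SLO. This is a minimal illustrative sketch; the `Projection` fields, the toy linear performance model, and the frequency list are assumptions for demonstration, not throttLLéM's actual ML model or interface.

```python
from dataclasses import dataclass


@dataclass
class Projection:
    # Hypothetical inputs mirroring the abstract's projections.
    batch_size: int        # projected number of requests in the next batch
    kv_cache_gb: float     # projected KV cache footprint in GB


def predict_iters_per_sec(freq_mhz: int, p: Projection) -> float:
    """Toy stand-in for the paper's ML performance model: throughput
    grows with GPU frequency and shrinks with batch size and KV cache
    pressure. The coefficients here are illustrative, not fitted."""
    return freq_mhz / (50.0 * (1 + 0.05 * p.batch_size + 0.02 * p.kv_cache_gb))


def pick_frequency(freqs_mhz: list[int], p: Projection,
                   slo_iters_per_sec: float) -> int:
    """Return the lowest candidate frequency whose predicted throughput
    satisfies the iteration-level SLO; fall back to the maximum."""
    for f in sorted(freqs_mhz):
        if predict_iters_per_sec(f, p) >= slo_iters_per_sec:
            return f
    return max(freqs_mhz)


proj = Projection(batch_size=8, kv_cache_gb=20.0)
chosen = pick_frequency([900, 1200, 1500, 1800], proj, slo_iters_per_sec=15.0)
```

Run once per scheduling interval, such a loop trades headroom above the SLO for lower clocks; the paper's contribution is making the throughput prediction accurate enough (within about 1 iteration/s) for this trade to be safe.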
