
Where Do the Joules Go? Diagnosing Inference Energy Consumption

Jae-Won Chung
Ruofan Wu
Jeff J. Ma
Mosharaf Chowdhury
Main: 8 pages · 13 figures · 4 tables · Bibliography: 3 pages · Appendix: 2 pages
Abstract

Energy is now a critical ML computing resource. While measuring energy consumption and observing trends is a valuable first step, accurately understanding and diagnosing why consumption differs across workloads is crucial for optimization. To that end, we begin by presenting a large-scale measurement study of inference time and energy across the generative AI landscape, covering 46 models, 7 tasks, and 1,858 configurations on NVIDIA H100 and B200 GPUs. Our empirical findings span order-of-magnitude variations: LLM task type can lead to 25× energy differences, video generation sometimes consumes more than 100× the energy of image generation, and GPU utilization differences can result in 3--5× energy differences. Based on these observations, we present a framework for reasoning about the underlying mechanisms that govern time and energy consumption. The essence is that time and energy are determined by latent metrics such as memory and utilization, which are in turn affected by factors across the algorithm, software, and hardware layers. Our framework also extends directly to throughput per watt, a critical metric for power-constrained datacenters.
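As a concrete illustration of the kind of measurement behind these observations, the sketch below times a batch of inference requests and reads the GPU's cumulative energy counter to derive energy per request, average power, and throughput per watt. This is a minimal sketch under stated assumptions, not the paper's measurement harness: it assumes pynvml's NVML bindings and a GPU that exposes a total-energy counter (Volta or newer), and `run_inference` is a hypothetical stand-in for any model's generation call.

```python
"""Minimal sketch: per-request GPU energy, average power, and throughput per watt.

Assumptions (not from the paper): pynvml (nvidia-ml-py) is installed and the GPU
supports nvmlDeviceGetTotalEnergyConsumption. `run_inference` is hypothetical.
"""

import time
import pynvml


def measure_energy(run_inference, num_requests: int, gpu_index: int = 0) -> dict:
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)

    # NVML reports cumulative energy since driver load, in millijoules.
    e0_mj = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)
    t0 = time.monotonic()

    for _ in range(num_requests):
        run_inference()

    elapsed_s = time.monotonic() - t0
    energy_j = (pynvml.nvmlDeviceGetTotalEnergyConsumption(handle) - e0_mj) / 1e3
    pynvml.nvmlShutdown()

    avg_power_w = energy_j / elapsed_s        # energy = average power x time
    throughput_rps = num_requests / elapsed_s  # requests per second
    return {
        "energy_per_request_j": energy_j / num_requests,
        "average_power_w": avg_power_w,
        "throughput_per_watt": throughput_rps / avg_power_w,  # req/s per watt
    }
```

Keeping throughput per watt alongside raw energy per request makes it easy to see whether a change (e.g., larger batch size) improved efficiency or merely shifted time into power.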
