How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference

Abstract

This paper introduces a novel infrastructure-aware benchmarking framework for quantifying the environmental footprint of LLM inference across 30 state-of-the-art models as deployed in commercial data centers. Our framework combines public API performance data with region-specific environmental multipliers and statistical inference of hardware configurations. We additionally apply cross-efficiency Data Envelopment Analysis (DEA) to rank models by performance relative to environmental cost. Our results show that o3 and DeepSeek-R1 emerge as the most energy-intensive models, consuming over 33 Wh per long prompt, more than 70 times the consumption of GPT-4.1 nano, and that Claude-3.7 Sonnet ranks highest in eco-efficiency. While a single short GPT-4o query consumes 0.43 Wh, scaling this to 700 million queries per day yields substantial annual environmental impacts: electricity use comparable to that of 35,000 U.S. homes, freshwater evaporation matching the annual drinking needs of 1.2 million people, and carbon emissions that would require a Chicago-sized forest to offset. These findings illustrate a growing paradox: although AI is becoming cheaper and faster, its global adoption drives disproportionately large resource consumption. Our study provides a standardized, empirically grounded methodology for benchmarking the sustainability of LLM deployments, laying a foundation for environmental accountability and sustainability standards in AI development.
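To give a feel for the eco-efficiency ranking the abstract describes, the sketch below shows the degenerate case of DEA with a single input (energy per query) and a single output (a benchmark score), where the CCR efficiency score reduces to the output/input ratio normalized by the best ratio. The paper's full cross-efficiency DEA handles multiple inputs and outputs via linear programming; the model names and numbers here are hypothetical placeholders, not the paper's data.

```python
# Degenerate single-input/single-output DEA: efficiency is the
# score-per-watt-hour ratio, normalized so the best model scores 1.0.
# All figures below are illustrative placeholders, not measured values.
models = {
    "model-A": {"energy_wh": 33.0, "score": 90.0},  # hypothetical heavy reasoning model
    "model-B": {"energy_wh": 5.0,  "score": 80.0},  # hypothetical mid-size model
    "model-C": {"energy_wh": 0.45, "score": 60.0},  # hypothetical lightweight model
}

# Output/input ratio per model, then normalize by the best ratio.
ratios = {m: v["score"] / v["energy_wh"] for m, v in models.items()}
best = max(ratios.values())
efficiency = {m: r / best for m, r in ratios.items()}

# Rank from most to least eco-efficient.
for m, e in sorted(efficiency.items(), key=lambda kv: -kv[1]):
    print(f"{m}: {e:.3f}")
```

In this toy data the lightweight model dominates on score-per-Wh despite its lower raw score, which mirrors the abstract's point that raw capability and eco-efficiency can rank models very differently.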

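The abstract's scale-up from one query to fleet-level impact is simple arithmetic, reproduced below from its stated figures (0.43 Wh per short GPT-4o query, 700 million queries per day). The 365-day year is a simplifying assumption of this sketch, and it ignores query growth and infrastructure overheads.

```python
# Back-of-envelope annual energy from the abstract's per-query figures.
WH_PER_QUERY = 0.43        # Wh per short GPT-4o query (from the abstract)
QUERIES_PER_DAY = 700e6    # queries/day (from the abstract)
DAYS_PER_YEAR = 365        # simplifying assumption: flat daily volume

annual_wh = WH_PER_QUERY * QUERIES_PER_DAY * DAYS_PER_YEAR
annual_gwh = annual_wh / 1e9  # 1 GWh = 1e9 Wh
print(f"Annual energy: {annual_gwh:.1f} GWh")  # ~109.9 GWh at the point estimate
```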
@article{jegham2025_2505.09598,
  title={How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference},
  author={Nidhal Jegham and Marwen Abdelatti and Lassad Elmoubarki and Abdeltawab Hendawi},
  journal={arXiv preprint arXiv:2505.09598},
  year={2025}
}