ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Optimizing Large Language Models: Metrics, Energy Efficiency, and Case Study Insights

7 April 2025
Tahniat Khan
Soroor Motie
Sedef Akinli Kocak
Shaina Raza
Abstract

The rapid adoption of large language models (LLMs) has led to significant energy consumption and carbon emissions, posing a critical challenge to the sustainability of generative AI technologies. This paper explores the integration of energy-efficient optimization techniques in the deployment of LLMs to address these environmental concerns. We present a case study and framework that demonstrate how strategic quantization and local inference techniques can substantially lower the carbon footprints of LLMs without compromising their operational effectiveness. Experimental results reveal that these methods can reduce energy consumption and carbon emissions by up to 45% post quantization, making them particularly suitable for resource-constrained environments. The findings provide actionable insights for achieving sustainability in AI while maintaining high levels of accuracy and responsiveness.
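The quantization the abstract refers to can be illustrated with a toy sketch (not the authors' code): symmetric 8-bit post-training quantization maps each float32 weight to an int8 value via a per-tensor scale, cutting storage roughly 4x, which is the kind of saving that drives the reported energy reductions. The function names and the example weight vector below are illustrative assumptions.

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

# Toy weight vector; real LLM weight tensors hold millions of values.
weights = [0.12, -0.50, 0.33, 0.99, -0.07]
q, scale = quantize_int8(weights)
recon = dequantize(q, scale)
# Each int8 weight needs 1 byte vs 4 bytes for float32 (~4x less memory),
# at the cost of a reconstruction error bounded by half the scale.
```

In practice, libraries apply this per channel or per block and fuse dequantization into the matrix multiply, but the memory-versus-precision trade-off shown here is the core idea.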

@article{khan2025_2504.06307,
  title={Optimizing Large Language Models: Metrics, Energy Efficiency, and Case Study Insights},
  author={Tahniat Khan and Soroor Motie and Sedef Akinli Kocak and Shaina Raza},
  journal={arXiv preprint arXiv:2504.06307},
  year={2025}
}