EdgeLoRA: An Efficient Multi-Tenant LLM Serving System on Edge Devices

2 July 2025
Zheyu Shen
Yexiao He
Ziyao Wang
Yuning Zhang
Guoheng Sun
Wanghao Ye
Ang Li
Main: 11 pages · 9 figures · 14 tables · Bibliography: 4 pages · Appendix: 1 page
Abstract

Large Language Models (LLMs) have gained significant attention due to their versatility across a wide array of applications. Fine-tuning LLMs with parameter-efficient adapters, such as Low-Rank Adaptation (LoRA), enables these models to adapt efficiently to downstream tasks without extensive retraining. Deploying fine-tuned LLMs on multi-tenant edge devices offers substantial benefits, such as reduced latency, enhanced privacy, and personalized responses. However, serving LLMs efficiently on resource-constrained edge devices presents critical challenges, including the complexity of adapter selection for different tasks and the memory overhead of frequent adapter swapping. Moreover, with multiple concurrent requests in multi-tenant settings, processing requests sequentially underutilizes computational resources and increases latency. This paper introduces EdgeLoRA, an efficient system for serving LLMs on edge devices in multi-tenant environments. EdgeLoRA incorporates three key innovations: (1) an adaptive adapter selection mechanism that streamlines the adapter configuration process; (2) heterogeneous memory management, which leverages intelligent adapter caching and pooling to mitigate memory operation overhead; and (3) batch LoRA inference, which enables efficient batch processing to significantly reduce computational latency. Comprehensive evaluations using the Llama3.1-8B model demonstrate that EdgeLoRA significantly outperforms the status quo (i.e., this http URL) in both latency and throughput, achieving up to a 4× boost in throughput while serving several orders of magnitude more adapters simultaneously. These results highlight EdgeLoRA's potential to transform edge deployment of LLMs in multi-tenant scenarios, offering a scalable and efficient solution for resource-constrained environments.
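The sketch below illustrates the core idea behind batch LoRA inference in a multi-adapter setting: the shared base projection is computed once for the whole batch, while each request gathers its own low-rank update from an adapter pool kept resident in memory. This is a minimal illustration under assumed names and shapes (AdapterPool, batch_lora_linear, rank, and so on), not the EdgeLoRA implementation.

    import torch

    class AdapterPool:
        """Holds the LoRA factors of many adapters resident in memory (illustrative)."""
        def __init__(self, num_adapters: int, d_in: int, d_out: int, rank: int):
            # A: (num_adapters, rank, d_in), B: (num_adapters, d_out, rank)
            self.A = torch.randn(num_adapters, rank, d_in) * 0.01
            self.B = torch.zeros(num_adapters, d_out, rank)  # standard LoRA init: B = 0

    def batch_lora_linear(x, W, pool, adapter_ids):
        """Apply y_i = W x_i + B_{a_i} A_{a_i} x_i for every request i in one batch.
        x: (batch, d_in), W: (d_out, d_in), adapter_ids: (batch,) adapter index per request."""
        base = x @ W.T                              # shared dense GEMM, computed once for the batch
        A = pool.A[adapter_ids]                     # (batch, rank, d_in), gathered per request
        B = pool.B[adapter_ids]                     # (batch, d_out, rank)
        update = torch.bmm(B, torch.bmm(A, x.unsqueeze(-1)))  # batched low-rank updates
        return base + update.squeeze(-1)

    # Usage: four concurrent requests served in one forward pass with three distinct adapters.
    pool = AdapterPool(num_adapters=8, d_in=64, d_out=64, rank=8)
    x = torch.randn(4, 64)
    W = torch.randn(64, 64)
    y = batch_lora_linear(x, W, pool, torch.tensor([0, 2, 2, 5]))
    print(y.shape)  # torch.Size([4, 64])

Batching the low-rank updates this way avoids serializing requests that use different adapters, which is the sequential-processing bottleneck the abstract describes.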

@article{shen2025_2507.01438,
  title={EdgeLoRA: An Efficient Multi-Tenant LLM Serving System on Edge Devices},
  author={Zheyu Shen and Yexiao He and Ziyao Wang and Yuning Zhang and Guoheng Sun and Wanghao Ye and Ang Li},
  journal={arXiv preprint arXiv:2507.01438},
  year={2025}
}