PIPO: Pipelined Offloading for Efficient Inference on Consumer Devices

15 March 2025
Yangyijian Liu
Jun Li
Wu-Jun Li
Abstract

The high memory and computation demands of large language models (LLMs) make them challenging to deploy on consumer devices with limited GPU memory. Offloading can mitigate the memory constraint but often suffers from low GPU utilization, leading to low inference efficiency. In this work, we propose a novel framework, called pipelined offloading (PIPO), for efficient inference on consumer devices. PIPO designs a fine-grained offloading pipeline, complemented with optimized data transfer and computation, to achieve high concurrency and efficient scheduling for inference. Experimental results show that, compared with the state-of-the-art baseline, PIPO increases GPU utilization from below 40% to over 90% and achieves up to 3.1× higher throughput, running on a laptop equipped with an RTX 3060 GPU with 6 GB of memory.
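The core idea behind such pipelining, overlapping weight transfers with computation so the GPU is never idle waiting for data, can be illustrated with a short PyTorch sketch. This is not the authors' PIPO implementation; the two-stream prefetch loop, the layer size, and the ReLU stand-in for a real transformer layer below are illustrative assumptions.

    # Sketch: while the GPU computes layer i, the next layer's weights are
    # copied host-to-device on a separate CUDA stream, so transfer and
    # compute overlap instead of running serially.
    import torch

    device = torch.device("cuda")
    num_layers, hidden = 8, 1024  # illustrative sizes, not from the paper

    # Weights live in pinned host memory so asynchronous H2D copies work.
    cpu_layers = [torch.randn(hidden, hidden).pin_memory()
                  for _ in range(num_layers)]

    copy_stream = torch.cuda.Stream()          # dedicated to weight transfers
    compute_stream = torch.cuda.current_stream()

    def prefetch(i):
        # Launch an asynchronous host-to-device copy of layer i's weights.
        with torch.cuda.stream(copy_stream):
            return cpu_layers[i].to(device, non_blocking=True)

    x = torch.randn(1, hidden, device=device)
    gpu_w = prefetch(0)
    for i in range(num_layers):
        # Block the compute stream until layer i's weights have arrived.
        compute_stream.wait_stream(copy_stream)
        w = gpu_w
        w.record_stream(compute_stream)  # tell the allocator w is used here too
        if i + 1 < num_layers:
            # Overlap: the next copy proceeds while this layer's matmul runs.
            gpu_w = prefetch(i + 1)
        x = torch.relu(x @ w)            # stand-in for the real layer computation

    torch.cuda.synchronize()
    print(x.shape)                       # torch.Size([1, 1024])

With serial offloading, each layer's transfer and compute times add; with the overlap above, each iteration costs roughly the maximum of the two, which is the source of the GPU-utilization gains the abstract reports.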

View on arXiv
@article{liu2025_2504.03664,
  title={PIPO: Pipelined Offloading for Efficient Inference on Consumer Devices},
  author={Yangyijian Liu and Jun Li and Wu-Jun Li},
  journal={arXiv preprint arXiv:2504.03664},
  year={2025}
}