Task-Aware Parameter-Efficient Fine-Tuning of Large Pre-Trained Models at the Edge

29 March 2025
Senkang Hu, Yanan Ma, Yihang Tao, Zhengru Fang, Zihan Fang, Yiqin Deng, Sam Kwong, Yuguang Fang
Abstract

Large language models (LLMs) have achieved remarkable success in tasks such as decision-making, reasoning, and question answering, and are increasingly deployed on edge devices. However, fine-tuning LLMs for specific tasks at the edge is challenging because of the high computational cost and the limited storage and energy budgets of edge hardware. To address this, we propose TaskEdge, a task-aware parameter-efficient fine-tuning framework for the edge that allocates the most effective parameters to the target task and updates only those task-specific parameters. Specifically, we first design a parameter importance criterion that incorporates both weights and input activations into the computation of weight importance. We then propose a model-agnostic task-specific parameter allocation algorithm that distributes task-specific parameters evenly across the model rather than concentrating them in particular regions. As a result, TaskEdge significantly reduces computational cost and memory usage while maintaining performance on the target downstream tasks, updating less than 0.1% of the parameters. In addition, TaskEdge can be easily combined with structured sparsity to enable acceleration on NVIDIA's specialized sparse tensor cores, and it integrates seamlessly with LoRA for efficient sparse low-rank adaptation. Extensive experiments on a variety of tasks demonstrate the effectiveness of TaskEdge.
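The abstract names two ingredients, an activation-aware weight-importance score and an even per-layer allocation of the trainable budget, without giving formulas. Below is a minimal PyTorch sketch of both ideas, assuming a Wanda-style score (|W| scaled by the per-input-channel L2 norm of the activations) and a fixed per-layer budget; the function names, the budget parameter, and the exact scoring rule are illustrative assumptions, not the paper's definitions.

import torch

def importance_scores(weight: torch.Tensor, acts: torch.Tensor) -> torch.Tensor:
    # weight: (out_features, in_features); acts: (num_tokens, in_features).
    # Assumed Wanda-style criterion: |W| scaled by the L2 norm of each
    # input-activation channel, so strongly-excited inputs raise importance.
    act_norm = acts.norm(p=2, dim=0)             # (in_features,)
    return weight.abs() * act_norm.unsqueeze(0)  # broadcast over output rows

def layer_mask(weight: torch.Tensor, acts: torch.Tensor,
               budget: float = 0.001) -> torch.Tensor:
    # Select the top `budget` fraction of weights *within this layer*, which
    # keeps the trainable parameters spread across all layers rather than
    # letting a global top-k concentrate them in a few regions.
    scores = importance_scores(weight, acts)
    k = max(1, int(budget * scores.numel()))
    threshold = scores.flatten().topk(k).values.min()
    return scores >= threshold                   # boolean mask of trainable weights

In training, such a mask would gate the update, e.g. weight.grad.mul_(layer_mask(weight, acts)) before the optimizer step, so that fewer than 0.1% of the parameters change while the rest stay frozen.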

@article{hu2025_2504.03718,
  title={Task-Aware Parameter-Efficient Fine-Tuning of Large Pre-Trained Models at the Edge},
  author={Senkang Hu and Yanan Ma and Yihang Tao and Zhengru Fang and Zihan Fang and Yiqin Deng and Sam Kwong and Yuguang Fang},
  journal={arXiv preprint arXiv:2504.03718},
  year={2025}
}