GradOT: Training-free Gradient-preserving Offsite-tuning for Large Language Models

Kai Yao
Zhaorui Tan
Penglei Gao
Lichun Li
Kaixin Wu
Yinggui Wang
Yuan Zhao
Yixin Ji
Wei Wang
Jianke Zhu
Main: 8 pages · 7 figures · 10 tables · Bibliography: 3 pages · Appendix: 5 pages
Abstract

With the rapid growth of large language models (LLMs), centralized fine-tuning has emerged as a key technique for adapting these models to domain-specific tasks, yet it poses privacy risks for both model and data owners. One promising solution, offsite-tuning (OT), addresses these risks: a weaker emulator is compressed from the original model and fine-tuned with an adapter, so that neither party must expose its full assets. However, existing OT-based methods incur high computational costs and lack theoretical analysis. This paper introduces GradOT, a novel OT approach based on gradient-preserving compression. By analyzing the OT problem through the lens of optimization, we propose a method that selectively applies compression techniques such as rank compression and channel pruning, preserving the gradients of the fine-tuned adapters while ensuring privacy. Extensive experiments demonstrate that our approach surpasses existing OT methods in both privacy protection and model performance. Our method provides a theoretical foundation for OT and offers a practical, training-free solution for offsite-tuning of large-scale LLMs.
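To make the two compression primitives named in the abstract concrete, the sketch below shows generic rank compression (truncated SVD) and magnitude-based channel pruning applied to a weight matrix. This is an illustrative sketch only, not the GradOT method itself: the paper's contribution is *which* layers get which compression, chosen to preserve adapter gradients, and that selection logic is not reproduced here. The function names and the L2-norm pruning criterion are assumptions for illustration.

```python
import numpy as np

def low_rank_compress(W: np.ndarray, rank: int) -> np.ndarray:
    """Rank compression: best rank-`rank` approximation of W via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep only the top-`rank` singular triplets.
    return (U[:, :rank] * S[:rank]) @ Vt[:rank, :]

def channel_prune(W: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Channel pruning: zero out output channels (rows) with the smallest
    L2 norm, keeping a `keep_ratio` fraction of channels.
    (Illustrative criterion; GradOT selects compression to preserve gradients.)
    """
    norms = np.linalg.norm(W, axis=1)
    k = max(1, int(keep_ratio * W.shape[0]))
    keep = np.argsort(norms)[-k:]          # indices of the k largest-norm rows
    W_pruned = np.zeros_like(W)
    W_pruned[keep] = W[keep]
    return W_pruned

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))          # stand-in for one layer's weights
W_lr = low_rank_compress(W, rank=8)        # emulator layer, rank-compressed
W_cp = channel_prune(W, keep_ratio=0.5)    # emulator layer, channel-pruned
```

Both operations are training-free: they require only linear algebra on the frozen weights, which is what makes a training-free OT pipeline feasible at LLM scale.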
