Communication-Efficient and Personalized Federated Foundation Model Fine-Tuning via Tri-Matrix Adaptation

Abstract

In federated learning, fine-tuning pre-trained foundation models poses significant challenges, particularly high communication costs and suboptimal model performance due to data heterogeneity across clients. To address these issues, this paper introduces communication-efficient federated LoRA adaptation (CE-LoRA), a method that employs a tri-factorization low-rank adaptation approach with personalized model parameter aggregation. We first present a novel LoRA parameter factorization that introduces a small dense matrix, which significantly reduces communication cost while achieving empirical performance comparable to transferring the low-rank parameter matrices as in existing methods. Without violating data privacy, the server accounts for client similarity in both the training data and the model parameter space, and learns personalized weights for model aggregation. Our experiments on various LLM and VLM fine-tuning tasks demonstrate that CE-LoRA not only significantly reduces communication overhead but also improves performance under non-independent and identically distributed (non-IID) data conditions. In addition, CE-LoRA improves data privacy protection, effectively mitigating gradient-based data reconstruction attacks.
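The abstract does not spell out the exact parameterization, so the following is only a minimal sketch of a tri-matrix LoRA layer: the frozen weight is adapted by a product A C B, where A and B are low-rank factors and C is a small dense r-by-r matrix. The class and parameter names, the initialization, and the choice of which factor is exchanged with the server are assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn

class TriMatrixLoRALinear(nn.Module):
    """Hypothetical tri-matrix LoRA adapter around a frozen nn.Linear."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weight stays frozen

        d_out, d_in = base.out_features, base.in_features
        # Low-rank factors; in a federated setting these could be kept local.
        self.A = nn.Parameter(torch.randn(d_out, rank) * 0.01)
        self.B = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        # Small dense r x r matrix; communicating only this factor would shrink
        # the per-layer payload from O((d_in + d_out) * r) to O(r^2).
        self.C = nn.Parameter(torch.zeros(rank, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scaling * x B^T C^T A^T, i.e. the adapter delta is A C B.
        delta = x @ self.B.t() @ self.C.t() @ self.A.t()
        return self.base(x) + self.scaling * delta

Under this assumed split, a client would upload only C (rank-squared entries per adapted layer) each round, which is where the claimed communication saving over transmitting both standard LoRA factors would come from.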

@article{li2025_2503.23869,
  title={ Communication-Efficient and Personalized Federated Foundation Model Fine-Tuning via Tri-Matrix Adaptation },
  author={ Yongle Li and Bo Liu and Sheng Huang and Zheng Zhang and Xiaotong Yuan and Richang Hong },
  journal={arXiv preprint arXiv:2503.23869},
  year={ 2025 }
}