Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations

Abstract

Adapting pre-trained foundation models to diverse downstream tasks is a core practice in artificial intelligence. However, the wide range of tasks and high computational costs make full fine-tuning impractical. To overcome this, parameter-efficient fine-tuning (PEFT) methods such as LoRA have emerged as a growing research focus. Despite their success, these methods are primarily designed for linear layers, focusing on two-dimensional matrices while largely ignoring higher-dimensional parameter spaces such as convolutional kernels. Moreover, directly applying these methods to higher-dimensional parameter spaces often disrupts their structural relationships. Given the rapid advancement of matrix-based PEFT methods, rather than designing a specialized strategy, we propose a generalization that extends matrix-based PEFT methods to higher-dimensional parameter spaces without compromising their structural properties. Specifically, we treat parameters as elements of a Lie group, with updates modeled as perturbations in the corresponding Lie algebra. These perturbations are mapped back to the Lie group through the exponential map, ensuring smooth, consistent updates that preserve the inherent structure of the parameter space. Extensive experiments on computer vision and natural language processing tasks validate the effectiveness and versatility of our approach, demonstrating clear improvements over existing methods.
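To make the exponential-map idea concrete, here is a minimal sketch for a 2-D weight matrix. The names (`matrix_exp`, the low-rank factors `B` and `C`, the update rule `expm(A) @ W`) are illustrative assumptions, not the paper's actual parameterization: a low-rank perturbation `A = B @ C` in the tangent space (Lie algebra) is pushed through a truncated-series matrix exponential back onto the group, and the result multiplies the frozen pre-trained weight.

```python
import numpy as np

def matrix_exp(A, terms=30):
    """Truncated Taylor series for the matrix exponential exp(A).

    Adequate for small-norm perturbations, which is the typical
    regime for fine-tuning updates.
    """
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k  # A^k / k!, built incrementally
        E = E + term
    return E

rng = np.random.default_rng(0)
d, r = 8, 2                               # weight size and (assumed) rank budget
W = rng.standard_normal((d, d))           # frozen pre-trained weight (illustrative)
B = rng.standard_normal((d, r)) * 0.01    # trainable low-rank factors parameterizing
C = rng.standard_normal((r, d)) * 0.01    # the Lie-algebra (tangent-space) perturbation
A = B @ C                                 # perturbation in the Lie algebra, rank <= r
W_new = matrix_exp(A) @ W                 # exponential map back to the group, applied to W
```

Note the two properties that motivate the construction: a zero perturbation maps to the identity (so fine-tuning starts exactly at the pre-trained weights), and the update acts by group multiplication rather than raw addition, which is what lets the scheme respect the structure of the parameter space.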

@article{si2025_2504.00851,
  title={Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations},
  author={Chongjie Si and Zhiyi Shi and Xuehui Wang and Yichen Xiao and Xiaokang Yang and Wei Shen},
  journal={arXiv preprint arXiv:2504.00851},
  year={2025}
}