
Parameter-Efficient Fine-Tuning via Circular Convolution

27 July 2024
Chenyi Zi, Jiashun Cheng, Zijing Liu, Ziqi Gao, Fugee Tsung, Yu-Feng Li, Jia Li
arXiv:2407.19342 (abs | PDF | HTML)
Main: 8 pages · Appendix: 4 pages · Bibliography: 4 pages · 7 figures · 10 tables
Abstract

Low-Rank Adaptation (LoRA) has gained popularity for fine-tuning large foundation models, leveraging low-rank matrices $\mathbf{A}$ and $\mathbf{B}$ to represent weight changes (i.e., $\Delta\mathbf{W} = \mathbf{B}\mathbf{A}$). This method reduces trainable parameters and mitigates the heavy memory consumption associated with full delta matrices by sequentially multiplying $\mathbf{A}$ and $\mathbf{B}$ with the activation. Despite its success, the intrinsic low-rank characteristic may limit its performance. Although several variants have been proposed to address this issue, they often overlook the crucial computational and memory efficiency brought by LoRA. In this paper, we propose Circular Convolution Adaptation (C$^3$A), which not only achieves high-rank adaptation with enhanced performance but also excels in both computational and memory efficiency. Extensive experiments demonstrate that C$^3$A consistently outperforms LoRA and its variants across various fine-tuning tasks.
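For intuition, the sketch below illustrates the general idea behind circular-convolution adaptation in PyTorch: a frozen linear layer's output is augmented by circularly convolving the activation with a small learnable kernel, which amounts to multiplying by a circulant (and hence potentially full-rank) matrix while storing only d extra parameters per layer. This is a minimal sketch under assumptions of mine, not the paper's exact formulation: the class name `CircConvAdapter`, the single per-layer kernel, and the square-layer restriction are illustrative choices (the paper may, for example, use a block-wise variant).

```python
import torch
import torch.nn as nn


class CircConvAdapter(nn.Module):
    """Hypothetical adapter: add a circular convolution of the input with a
    learnable kernel on top of a frozen linear layer. The circular convolution
    is computed in O(d log d) via the FFT and is equivalent to multiplying by
    a circulant matrix, which can be full rank despite storing only d
    trainable parameters."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        assert base.in_features == base.out_features, "sketch assumes a square layer"
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # One learnable kernel of length d, initialised to zero so training
        # starts from the unmodified pretrained layer.
        self.kernel = nn.Parameter(torch.zeros(base.in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = x.shape[-1]
        # Circular convolution theorem: circ_conv(x, k) = irfft(rfft(x) * rfft(k)).
        delta = torch.fft.irfft(
            torch.fft.rfft(x, dim=-1) * torch.fft.rfft(self.kernel, n=d),
            n=d, dim=-1,
        )
        return self.base(x) + delta


# Usage: wrap a frozen 768-dim projection and fine-tune only the kernel.
layer = CircConvAdapter(nn.Linear(768, 768))
out = layer(torch.randn(2, 16, 768))  # shape: (2, 16, 768)
```

In this reading, the contrast with LoRA is that the delta update is parameterised by a length-d kernel rather than a rank-r product $\mathbf{B}\mathbf{A}$, so the induced weight change need not be low rank, while the FFT keeps the per-token cost below a dense d × d multiplication.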
