Parameter Efficient Mamba Tuning via Projector-targeted Diagonal-centric Linear Transformation

Despite growing interest in the Mamba architecture as a potential replacement for the Transformer architecture, parameter-efficient fine-tuning (PEFT) approaches for Mamba remain largely unexplored. In this study, we present two key insight-driven contributions for PEFT in the Mamba architecture: (1) Although state-space models (SSMs) have been regarded as the cornerstone of the Mamba architecture, and were therefore expected to play the primary role in transfer learning, our findings reveal that Projectors, not SSMs, are the predominant contributors to transfer learning. (2) Building on this observation, we propose a novel PEFT method specialized for the Mamba architecture: Projector-targeted Diagonal-centric Linear Transformation (ProDiaL). ProDiaL optimizes only the pretrained Projectors for new tasks through diagonal-centric linear transformation matrices, without directly fine-tuning the Projector weights. This targeted approach enables efficient task adaptation using less than 1% of the total parameters, and it exhibits strong performance across both vision and language Mamba models, highlighting its versatility and effectiveness.
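The abstract does not spell out the exact parameterization of the diagonal-centric transformation, so the following is only a minimal PyTorch sketch of one plausible reading: a learnable matrix that is kept close to diagonal (here modeled, as an assumption, by a learnable diagonal plus a small low-rank off-diagonal term) is applied to a frozen projector weight, so that only the transformation parameters are trained. The class name `ProDiaLLinear` and the low-rank term are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn


class ProDiaLLinear(nn.Module):
    """Hypothetical sketch of projector-targeted adaptation.

    The pretrained projector weight W is frozen and adapted as W' = A @ W,
    where A is a diagonal-centric matrix A = diag(d) + u @ v^T. The low-rank
    off-diagonal term is an illustrative assumption, not the paper's exact form.
    """

    def __init__(self, frozen_linear: nn.Linear, rank: int = 4):
        super().__init__()
        out_features, _ = frozen_linear.weight.shape
        # Pretrained projector weights stay frozen; they are never updated.
        self.weight = nn.Parameter(frozen_linear.weight.detach(), requires_grad=False)
        self.bias = (
            nn.Parameter(frozen_linear.bias.detach(), requires_grad=False)
            if frozen_linear.bias is not None else None
        )
        # Diagonal part, initialized to the identity transformation.
        self.diag = nn.Parameter(torch.ones(out_features))
        # Small off-diagonal correction (low-rank), initialized near zero.
        self.u = nn.Parameter(torch.zeros(out_features, rank))
        self.v = nn.Parameter(torch.randn(out_features, rank) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Diagonal-centric transformation A = diag(d) + u v^T applied to W.
        transform = torch.diag(self.diag) + self.u @ self.v.t()
        adapted_weight = transform @ self.weight  # W' = A W, with W frozen
        return nn.functional.linear(x, adapted_weight, self.bias)
```

Under this reading, only `diag`, `u`, and `v` are trainable, so each adapted projector adds roughly `out_features * (1 + 2 * rank)` parameters, which is how a sub-1% trainable-parameter budget of the kind the abstract reports could be achieved.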
@article{ham2025_2411.15224,
  title   = {Parameter Efficient Mamba Tuning via Projector-targeted Diagonal-centric Linear Transformation},
  author  = {Seokil Ham and Hee-Seon Kim and Sangmin Woo and Changick Kim},
  journal = {arXiv preprint arXiv:2411.15224},
  year    = {2025}
}