FISH-Tuning: Enhancing PEFT Methods with Fisher Information

Abstract

The rapid growth in the parameter size of Large Language Models (LLMs) has led to the development of Parameter-Efficient Fine-Tuning (PEFT) methods to alleviate the computational costs of fine-tuning. Among these, Fisher Induced Sparse uncHanging (FISH) Mask is a selection-based PEFT technique that identifies a subset of pre-trained parameters for fine-tuning based on approximate Fisher information. However, the integration of FISH Mask with other PEFT methods, such as LoRA and Adapters, remains underexplored. In this paper, we propose FISH-Tuning, a novel approach that incorporates FISH Mask into addition-based and reparameterization-based PEFT methods, including LoRA, Adapters, and their variants. By leveraging Fisher information to select critical parameters within these methods, FISH-Tuning achieves superior performance without additional memory overhead or inference latency. Experimental results across various datasets and pre-trained models demonstrate that FISH-Tuning consistently outperforms the vanilla PEFT methods with the same proportion of trainable parameters.
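As background for the selection step the abstract describes, the following is a minimal sketch of FISH-Mask-style parameter selection: the (empirical, diagonal) Fisher information of each parameter is approximated by the mean squared per-example gradient, and only the top fraction of parameters by that score is marked trainable. The function names, the use of precomputed per-example gradients, and the `keep_ratio` interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fisher_scores(per_example_grads):
    """Diagonal empirical Fisher approximation: mean of squared
    per-example gradients (assumed precomputed and flattened)."""
    stacked = np.stack(per_example_grads)          # (num_examples, num_params)
    return np.mean(np.square(stacked), axis=0)     # (num_params,)

def fish_mask(scores, keep_ratio):
    """Boolean mask selecting the top `keep_ratio` fraction of
    parameters by Fisher score; only these would be fine-tuned."""
    k = max(1, int(keep_ratio * scores.size))
    top_idx = np.argsort(scores)[-k:]              # indices of largest scores
    mask = np.zeros(scores.shape, dtype=bool)
    mask[top_idx] = True
    return mask
```

In FISH-Tuning, per the abstract, the same score-and-select idea is applied inside the trainable parameters introduced by LoRA or Adapter modules rather than to the full pre-trained weight matrix.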

@article{xue2025_2504.04050,
  title={FISH-Tuning: Enhancing PEFT Methods with Fisher Information},
  author={Kang Xue and Ming Dong and Xinhui Tu and Tingting He},
  journal={arXiv preprint arXiv:2504.04050},
  year={2025}
}