Parameter-Efficient Tuning of Large Convolutional Models
To address the high computational and parameter cost of fine-tuning large pre-trained models, researchers have developed parameter-efficient methods that update only a subset of parameters for downstream tasks. However, these works often overlook the distinct properties of convolutional kernels, which remain essential components of many large models, such as Stable Diffusion. In this study, we first introduce a filter subspace by decomposing the convolutional kernels within each network layer over a small set of filter subspace elements, referred to as filter atoms. We then fine-tune these models to extract task-specific representations by adapting only the filter atoms, typically a few hundred parameters. To expand the parameter space available for tuning, we further show a simple approach that generates an overcomplete filter subspace by recursively decomposing each filter atom over another set of filter atoms. Fine-tuning the filter atoms reshapes the filter subspace, enabling convolutional layers to adapt efficiently to diverse downstream tasks. Extensive experiments show that this simple scheme surpasses previous tuning baselines on both discriminative and generative tasks. Our approach can potentially complement many existing fine-tuning methods.
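To make the idea concrete, below is a minimal PyTorch sketch of a convolutional layer whose kernel is assembled from a small set of shared filter atoms, with only the atoms left trainable for fine-tuning. The class and parameter names (`FilterAtomConv2d`, `coeff`, `atoms`, `num_atoms`) are hypothetical illustrations, not the authors' released code, and details such as initialization and the recursive (overcomplete) decomposition are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FilterAtomConv2d(nn.Module):
    """Conv2d whose kernel is a linear combination of a few shared filter atoms.

    Sketch: the full kernel W[out_ch, in_ch, k, k] is assembled as
        W = sum_m coeff[out_ch, in_ch, m] * atoms[m, k, k],
    and only `atoms` (a few hundred parameters per layer) is adapted downstream.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, num_atoms=9, padding=1):
        super().__init__()
        self.padding = padding
        # Coefficients over the filter subspace: kept frozen during fine-tuning.
        self.coeff = nn.Parameter(
            torch.randn(out_ch, in_ch, num_atoms) / num_atoms**0.5,
            requires_grad=False,
        )
        # Filter atoms: the small spatial basis that is fine-tuned.
        self.atoms = nn.Parameter(
            torch.randn(num_atoms, kernel_size, kernel_size) * 0.1
        )

    def forward(self, x):
        # Assemble the kernel from atoms, then apply a standard convolution.
        weight = torch.einsum('oim,mkl->oikl', self.coeff, self.atoms)
        return F.conv2d(x, weight, padding=self.padding)

layer = FilterAtomConv2d(64, 64, kernel_size=3, num_atoms=9)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 9 atoms * 3 * 3 = 81 trainable parameters for this layer
```

In this sketch, freezing `coeff` and training only `atoms` reshapes the filter subspace while leaving the pre-trained combination weights untouched, which is the source of the parameter savings described above.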