
Sparsity Outperforms Low-Rank Projections in Few-Shot Adaptation

Abstract

Adapting Vision-Language Models (VLMs) to new domains with few labeled samples remains a significant challenge due to severe overfitting and computational constraints. State-of-the-art solutions, such as low-rank reparameterization, mitigate these issues but often struggle with generalization and require extensive hyperparameter tuning. In this paper, we propose a novel Sparse Optimization (SO) framework. Unlike low-rank approaches that typically constrain updates to a fixed subspace, our SO method leverages high sparsity to dynamically adjust very few parameters. We introduce two key paradigms. First, we advocate for \textit{local sparsity and global density}, which updates a minimal subset of parameters per iteration while maintaining overall model expressiveness. Second, we advocate for \textit{local randomness and global importance}, which sparsifies the gradient via random selection while pruning the first moment based on importance. This combination significantly mitigates overfitting and ensures stable adaptation in low-data regimes. Extensive experiments on 11 diverse datasets show that SO achieves state-of-the-art few-shot adaptation performance while reducing memory overhead.
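
The two paradigms can be illustrated with a minimal optimizer-step sketch in PyTorch. The function name, the `density` and `beta` hyperparameters, and the exact masking and pruning rules below are assumptions made for illustration; they are not the authors' implementation.

```python
import torch

def sparse_adaptation_step(param, grad, exp_avg, lr=1e-3, beta=0.9, density=0.01):
    """One hypothetical sparse update step (illustrative sketch only).

    - "Local randomness": keep a small random subset of gradient entries.
    - "Global importance": prune the first moment (momentum), keeping only
      its largest-magnitude entries.
    """
    # Local randomness: a random binary mask keeps roughly `density` of the gradient entries.
    rand_mask = (torch.rand_like(grad) < density).float()
    sparse_grad = grad * rand_mask

    # Accumulate the sparsified gradient into the first moment.
    exp_avg.mul_(beta).add_(sparse_grad, alpha=1 - beta)

    # Global importance: keep only the top-k entries of the first moment by magnitude.
    k = max(1, int(density * exp_avg.numel()))
    threshold = exp_avg.abs().flatten().kthvalue(exp_avg.numel() - k + 1).values
    exp_avg.mul_((exp_avg.abs() >= threshold).float())

    # Apply the sparse update. Each step touches very few parameters ("local sparsity"),
    # but different entries are selected across iterations ("global density").
    param.data.add_(exp_avg, alpha=-lr)
    return param, exp_avg
```

In this sketch, sparsity is enforced twice per step: randomly on the incoming gradient and by importance on the momentum buffer, so only a small, changing subset of parameters is ever moved.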

@article{mrabah2025_2504.12436,
  title={Sparsity Outperforms Low-Rank Projections in Few-Shot Adaptation},
  author={Nairouz Mrabah and Nicolas Richet and Ismail Ben Ayed and Éric Granger},
  journal={arXiv preprint arXiv:2504.12436},
  year={2025}
}