StarFT: Robust Fine-tuning of Zero-shot Models via Spuriosity Alignment

19 May 2025
Younghyun Kim, Jongheon Jeong, Sangkyung Kwak, Kyungmin Lee, Juho Lee, Jinwoo Shin
Abstract

Learning robust representations from data often requires scale, which has led to the success of recent zero-shot models such as CLIP. However, this robustness can easily deteriorate when these models are fine-tuned on other downstream tasks (e.g., of smaller scale). Previous works often interpret this phenomenon in the context of domain shift, developing fine-tuning methods that aim to preserve the original domain as much as possible. However, in a different context, models fine-tuned with limited data are also prone to learning features that are spurious to humans, such as background or texture. In this paper, we propose StarFT (Spurious Textual Alignment Regularization), a novel framework for fine-tuning zero-shot models that enhances robustness by preventing them from learning spuriosity. We introduce a regularization that aligns the output distribution for spuriosity-injected labels with that of the original zero-shot model, ensuring that the model is not further induced to extract irrelevant features from these descriptions. We leverage recent language models to obtain such spuriosity-injected labels by generating alternative textual descriptions that highlight potentially confounding features. Extensive experiments validate the robust generalization of StarFT and its emergent properties: zero-shot group robustness and improved zero-shot classification. Notably, StarFT boosts worst-group and average accuracy by 14.30% and 3.02%, respectively, in the Waterbirds group shift scenario, where other robust fine-tuning baselines even show degraded performance.
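As a rough illustration of the alignment regularizer described in the abstract, the sketch below computes a KL divergence between the fine-tuned model's output distribution over spuriosity-injected text prompts and that of the frozen zero-shot model. This is a minimal sketch, not the paper's official implementation: the function name, the CLIP-style encode_image/encode_text interface, the temperature value, and the tokenizer call are all assumptions made for illustration.

# Minimal sketch of a spuriosity-alignment regularizer, assuming a
# CLIP-style model (e.g., via open_clip) with encode_image/encode_text
# methods. Names and hyperparameters are illustrative assumptions,
# not the paper's official implementation.
import torch
import torch.nn.functional as F

def spuriosity_alignment_loss(finetuned_model, zeroshot_model,
                              images, spurious_prompts, tokenizer,
                              temperature=0.07):
    """KL-align the fine-tuned model's distribution over
    spuriosity-injected text prompts with that of the frozen
    zero-shot model."""
    text_tokens = tokenizer(spurious_prompts).to(images.device)

    # Zero-shot reference model stays frozen.
    with torch.no_grad():
        zs_img = F.normalize(zeroshot_model.encode_image(images), dim=-1)
        zs_txt = F.normalize(zeroshot_model.encode_text(text_tokens), dim=-1)
        zs_logits = zs_img @ zs_txt.t() / temperature

    ft_img = F.normalize(finetuned_model.encode_image(images), dim=-1)
    ft_txt = F.normalize(finetuned_model.encode_text(text_tokens), dim=-1)
    ft_logits = ft_img @ ft_txt.t() / temperature

    # KL(zero-shot || fine-tuned) over the spuriosity-injected label set.
    return F.kl_div(F.log_softmax(ft_logits, dim=-1),
                    F.softmax(zs_logits, dim=-1),
                    reduction="batchmean")

In training, such a term would typically be added to the standard fine-tuning objective with a weighting coefficient (e.g., loss = cross_entropy + lambda * spuriosity_alignment_loss(...)), with the weight chosen by validation; the exact combination used in the paper may differ.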

@article{kim2025_2505.13232,
  title={StarFT: Robust Fine-tuning of Zero-shot Models via Spuriosity Alignment},
  author={Younghyun Kim and Jongheon Jeong and Sangkyung Kwak and Kyungmin Lee and Juho Lee and Jinwoo Shin},
  journal={arXiv preprint arXiv:2505.13232},
  year={2025}
}