Noise is an Efficient Learner for Zero-Shot Vision-Language Models

9 February 2025
Raza Imam
Asif Hanif
Jian Zhang
Khaled Waleed Dawoud
Yova Kementchedjhieva
Mohammad Yaqub
    VLM
Abstract

Recently, test-time adaptation has garnered attention as a method for tuning models without labeled data. The conventional approach to adapting pre-trained vision-language models (VLMs) at test time focuses primarily on tuning learnable prompts; however, this overlooks potential distribution shifts in the visual representations themselves. In this work, we address this limitation by introducing Test-Time Noise Tuning (TNT), a novel method for handling unpredictable shifts in the visual space. TNT leverages, for the first time, a noise adaptation strategy that optimizes learnable noise directly in the visual input space, enabling adaptive feature learning from a single test sample. We further introduce a novel approach to inter-view representation alignment that explicitly enforces coherence in embedding distances, ensuring consistent feature representations across views. Combined with scaled logits and confident view selection at inference, TNT substantially enhances VLM generalization and calibration, achieving average gains of +7.38% on natural distribution benchmarks and +0.80% on cross-dataset evaluations over zero-shot CLIP. These improvements lay a strong foundation for adaptive out-of-distribution handling.
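
For intuition, the sketch below shows one way the pipeline described in the abstract could look in PyTorch: a learnable noise tensor is optimized in the input space of a single test image by minimizing marginal entropy over the most confident augmented views, using temperature-scaled logits. This is an illustrative reconstruction under stated assumptions, not the authors' released implementation: the CLIP interface (encode_image, pre-computed normalized text features), the augmentation choice, and all hyperparameters are assumptions, and the inter-view alignment term on embedding distances is only indicated as a comment.

# Illustrative sketch of test-time noise tuning (assumptions noted below);
# not the authors' code. Requires torch and torchvision.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Assumed augmentation: random resized crops of the noised input.
random_view = T.RandomResizedCrop(224, scale=(0.5, 1.0), antialias=True)

def tnt_adapt(clip_model, text_features, image, n_views=16, steps=1,
              lr=5e-3, confidence_quantile=0.1):
    """Adapt a learnable additive noise tensor for one test image.
    Assumed interface: clip_model.encode_image(x) returns image features;
    text_features are pre-computed, L2-normalized class prompt embeddings."""
    delta = torch.zeros_like(image, requires_grad=True)   # learnable noise
    optimizer = torch.optim.AdamW([delta], lr=lr)

    for _ in range(steps):
        # Augmented views of the noised image (image is 1xCxHxW).
        views = torch.cat([random_view(image + delta) for _ in range(n_views)])
        feats = F.normalize(clip_model.encode_image(views), dim=-1)
        logits = 100.0 * feats @ text_features.t()          # scaled logits
        probs = logits.softmax(dim=-1)

        # Confident view selection: keep the lowest-entropy fraction of views.
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
        k = max(1, int(confidence_quantile * n_views))
        keep = entropy.topk(k, largest=False).indices

        # Marginal-entropy objective over the selected views; the paper's
        # inter-view alignment term on pairwise embedding distances would be
        # added to this loss.
        avg_probs = probs[keep].mean(dim=0)
        loss = -(avg_probs * avg_probs.clamp_min(1e-8).log()).sum()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Predict with the adapted noise applied to the original test image.
    with torch.no_grad():
        feats = F.normalize(clip_model.encode_image(image + delta), dim=-1)
        return (100.0 * feats @ text_features.t()).softmax(dim=-1)

In use, calling tnt_adapt(model, text_features, image) for each test image would replace the plain zero-shot forward pass, with the noise re-initialized per sample.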

@article{imam2025_2502.06019,
  title={Noise is an Efficient Learner for Zero-Shot Vision-Language Models},
  author={Raza Imam and Asif Hanif and Jian Zhang and Khaled Waleed Dawoud and Yova Kementchedjhieva and Mohammad Yaqub},
  journal={arXiv preprint arXiv:2502.06019},
  year={2025}
}