
Efficient Open Set Single Image Test Time Adaptation of Vision Language Models

Main: 12 pages · Bibliography: 2 pages · Appendix: 15 pages · 7 figures · 23 tables
Abstract

Adapting models to dynamic, real-world environments characterized by shifting data distributions and unseen test scenarios is a critical challenge in deep learning. In this paper, we consider a realistic and challenging Test-Time Adaptation setting, where a model must continuously adapt to test samples that arrive sequentially, one at a time, while distinguishing between known and unknown classes. Current Test-Time Adaptation methods operate under closed-set assumptions or batch processing, which differ from real-world open-set scenarios. We address this limitation by establishing a comprehensive benchmark for {\em Open-set Single-image Test-Time Adaptation using Vision-Language Models}. Furthermore, we propose ROSITA, a novel framework that leverages dynamically updated feature banks to identify reliable test samples and employs a contrastive learning objective to improve the separation between known and unknown classes. Our approach effectively adapts models to domain shifts for known classes while rejecting unfamiliar samples. Extensive experiments across diverse real-world benchmarks demonstrate that ROSITA sets a new state of the art in open-set TTA, achieving both strong performance and computational efficiency for real-time deployment. Our code can be found at the project site.
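The abstract describes two mechanisms: scoring each incoming image against class prototypes to decide whether it is a known or unknown sample, and a contrastive objective that pulls reliable known features toward a bank of known examples while pushing them away from an unknown-feature bank. The sketch below illustrates this general idea with NumPy; it is not the authors' implementation, and all function names, the thresholding scheme, and the InfoNCE-style loss form are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize vectors to unit length along the given axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def known_score(image_feat, text_protos):
    """Cosine similarity of one image feature to class text prototypes.

    Returns (max similarity, index of best class). A simple open-set rule
    (an assumption, not the paper's exact criterion) would treat samples
    below a threshold on this score as 'unknown'.
    """
    sims = l2_normalize(text_protos) @ l2_normalize(image_feat)
    return sims.max(), int(sims.argmax())

def contrastive_separation_loss(feat, known_bank, unknown_bank, tau=0.07):
    """InfoNCE-style loss (illustrative form): attract the test feature to
    features stored in the known bank, repel it from the unknown bank."""
    feat = l2_normalize(feat)
    pos = np.exp(l2_normalize(known_bank) @ feat / tau).sum()
    neg = np.exp(l2_normalize(unknown_bank) @ feat / tau).sum()
    return -np.log(pos / (pos + neg))
```

In a single-image TTA loop, each confidently scored feature would be appended to the corresponding bank before the next sample arrives, so the banks track the test distribution online.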

@article{sreenivas2025_2406.00481,
  title={Efficient Open Set Single Image Test Time Adaptation of Vision Language Models},
  author={Manogna Sreenivas and Soma Biswas},
  journal={arXiv preprint arXiv:2406.00481},
  year={2025}
}