Space Rotation with Basis Transformation for Training-free Test-Time Adaptation

Abstract

With the growing use of vision-language models (VLMs) in downstream tasks, test-time adaptation methods built on VLMs have attracted increasing attention for their ability to handle distribution shifts at test time. Although prior approaches have made progress, they typically either demand substantial computational resources or are constrained by the limitations of the original feature space, rendering them less effective for test-time adaptation. To address these challenges, we propose a training-free feature-space rotation with basis transformation for test-time adaptation. By leveraging the inherent distinctions among classes, we reconstruct the original feature space and map it to a new representation, thereby sharpening class differences and providing more effective guidance to the model during testing. Additionally, to better capture relevant information from the various classes, we maintain a dynamic queue that stores representative samples. Experimental results across multiple benchmarks demonstrate that our method outperforms state-of-the-art techniques in both performance and efficiency.
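Since only the abstract is reproduced here, the following is a minimal, hypothetical sketch of the kind of pipeline it describes: image features are rotated into a basis spanned by class prototypes (built here with a QR decomposition, an assumption), and a small per-class queue retains high-confidence test samples to refine predictions. All names, thresholds, and the scoring rule are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of basis-transformation rotation + dynamic per-class queues.
# The construction below (QR basis, confidence threshold, score blending) is an
# illustrative assumption, not the method described in the paper.
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
num_classes, dim, queue_size = 5, 64, 8

# Stand-ins for class prototypes (e.g., VLM text embeddings), one per class.
prototypes = rng.normal(size=(num_classes, dim))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

# Orthonormal basis spanning the prototypes; projecting onto it acts as a
# rotation that emphasizes inter-class directions.
basis, _ = np.linalg.qr(prototypes.T)          # shape: (dim, num_classes)

def rotate(feats):
    """Map features into the prototype-spanned subspace and renormalize."""
    proj = feats @ basis                       # coordinates in the new basis
    return proj / np.linalg.norm(proj, axis=-1, keepdims=True)

# Dynamic per-class queues holding representative (high-confidence) samples.
queues = [deque(maxlen=queue_size) for _ in range(num_classes)]

def predict(feat, conf_threshold=0.3):
    z = rotate(feat[None, :])                  # (1, num_classes)
    logits = z @ rotate(prototypes).T          # cosine similarity in rotated space
    # Blend in queue centroids when available (memory-augmented score).
    for c, q in enumerate(queues):
        if q:
            centroid = np.mean(np.stack(q), axis=0)
            logits[0, c] = 0.5 * logits[0, c] + 0.5 * float(z[0] @ centroid)
    pred = int(np.argmax(logits))
    probs = np.exp(logits) / np.exp(logits).sum()
    if probs[0, pred] > conf_threshold:        # enqueue only confident samples
        queues[pred].append(z[0])
    return pred

# Toy usage: stream random "test" features through the adapter.
for _ in range(20):
    feat = rng.normal(size=dim)
    feat /= np.linalg.norm(feat)
    predict(feat)
```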

@article{ding2025_2502.19946,
  title={Space Rotation with Basis Transformation for Training-free Test-Time Adaptation},
  author={Chenhao Ding and Xinyuan Gao and Songlin Dong and Yuhang He and Qiang Wang and Xiang Song and Alex Kot and Yihong Gong},
  journal={arXiv preprint arXiv:2502.19946},
  year={2025}
}