ATSTrack: Enhancing Visual-Language Tracking by Aligning Temporal and Spatial Scales

Yihao Zhen
Qiang Wang
Yu Qiao
Liangqiong Qu
Huijie Fan
Main: 9 pages, 7 figures, 3 tables
Bibliography: 2 pages
Abstract

A main challenge of Visual-Language Tracking (VLT) is the misalignment between visual inputs and language descriptions caused by target movement. Previous trackers have explored many effective feature-modification methods to preserve more aligned features. However, an important yet unexplored factor ultimately hinders their capability: the inherent difference in the temporal and spatial scales of information between visual and language inputs. To address this issue, we propose a novel visual-language tracker that enhances the effect of feature modification by Aligning the Temporal and Spatial scales of different input components, named ATSTrack. Specifically, we decompose each language description into phrases with different attributes based on their temporal and spatial correspondence with the visual input, and modify their features in a fine-grained manner. Moreover, we introduce a Visual-Language token that carries modified linguistic information from the previous frame to guide the model to extract visual features that are more relevant to the language description, thereby reducing the impact of the difference in spatial scale. Experimental results show that our proposed ATSTrack achieves performance comparable to existing methods. Our code will be released.
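
To make the mechanism concrete, below is a minimal PyTorch-style sketch of the two ideas described in the abstract: phrase-level feature modification grouped by attribute, and a Visual-Language (VL) token carried over from the previous frame to guide visual feature extraction. The class name, the three attribute groups, and the cross-attention design are illustrative assumptions for exposition, not the paper's actual architecture.

import torch
import torch.nn as nn

class VLTokenFusion(nn.Module):
    """Hypothetical sketch of ATSTrack-style fusion; all design
    choices below are assumptions made for exposition."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        # The paper decomposes descriptions by their temporal/spatial
        # correspondence with visual input; these three attribute
        # groups are assumed, not taken from the paper.
        self.attrs = ("appearance", "position", "motion")
        self.cross_attn = nn.ModuleDict({
            a: nn.MultiheadAttention(dim, heads, batch_first=True)
            for a in self.attrs
        })
        # Learned initial VL token, used before any frame is processed.
        self.vl_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, vis_feats, phrase_feats, prev_vl_token=None):
        # vis_feats: (B, N, dim) visual tokens for the current frame.
        # phrase_feats: dict attr -> (B, L_a, dim) phrase embeddings.
        B = vis_feats.size(0)
        vl = prev_vl_token if prev_vl_token is not None \
            else self.vl_token.expand(B, -1, -1)
        # Prepend the VL token so linguistic context from the previous
        # frame participates in extracting current visual features.
        x = torch.cat([vl, vis_feats], dim=1)
        # Modify features attribute by attribute (fine-grained
        # alignment): tokens attend to each phrase group in turn.
        for a in self.attrs:
            upd, _ = self.cross_attn[a](x, phrase_feats[a], phrase_feats[a])
            x = x + upd  # residual update per attribute
        new_vl, vis_out = x[:, :1], x[:, 1:]
        return vis_out, new_vl  # new_vl is passed to the next frame

In use, vis_out, vl = model(vis, phrases) processes the first frame, and model(vis_next, phrases, vl) processes the next, so the VL token threads modified linguistic information through time.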

@article{zhen2025_2507.00454,
  title={ATSTrack: Enhancing Visual-Language Tracking by Aligning Temporal and Spatial Scales},
  author={Yihao Zhen and Qiang Wang and Yu Qiao and Liangqiong Qu and Huijie Fan},
  journal={arXiv preprint arXiv:2507.00454},
  year={2025}
}