RGB-T Tracking Based on Mixed Attention
RGB-T tracking involves the use of images from both visible and thermal modalities. The primary objective is to adaptively leverage the relatively dominant modality under varying conditions to achieve more robust tracking than single-modality tracking. This paper proposes an RGB-T tracker based on a mixed attention mechanism that achieves complementary fusion of the two modalities (referred to as MACFT). In the feature extraction stage, separate transformer backbone branches extract modality-specific and modality-shared information. Mixed attention operations in the backbone enable information interaction and self-enhancement between the template and search images, constructing a robust feature representation that better captures the high-level semantic features of the target. In the feature fusion stage, a mixed attention-based modality fusion network achieves modality-adaptive fusion, suppressing noise from the low-quality modality while enhancing the information of the dominant one. Evaluation on multiple public RGB-T datasets demonstrates that the proposed tracker outperforms other RGB-T trackers on general evaluation metrics while also adapting to long-term tracking scenarios.
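To make the "mixed attention" idea concrete, the following is a minimal NumPy sketch of cross-modal attention between RGB and thermal token features. It is an illustration only, not the paper's actual MACFT architecture: the function names, the concatenated key/value scheme, and the simple averaging at the end are all assumptions for the sake of a runnable example.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: each query row attends over all key rows
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def mixed_attention_fusion(rgb_tokens, tir_tokens):
    """Hypothetical mixed attention: each modality's tokens attend both to
    themselves (self-enhancement) and to the other modality's tokens
    (cross-modal interaction), because keys/values from both modalities
    are concatenated. Outputs are averaged into one fused representation."""
    kv_rgb = np.concatenate([rgb_tokens, tir_tokens], axis=0)
    kv_tir = np.concatenate([tir_tokens, rgb_tokens], axis=0)
    rgb_out = attention(rgb_tokens, kv_rgb, kv_rgb)
    tir_out = attention(tir_tokens, kv_tir, kv_tir)
    return (rgb_out + tir_out) / 2

# toy usage: 4 tokens per modality, 8-dim features
rng = np.random.default_rng(0)
rgb = rng.standard_normal((4, 8))
tir = rng.standard_normal((4, 8))
fused = mixed_attention_fusion(rgb, tir)
```

In a real tracker the attention weights would be learned (via query/key/value projection matrices) and the fusion would be modality-adaptive rather than a fixed average; the sketch only shows how concatenating keys and values lets one attention pass mix self- and cross-modal information.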