Multi-modal Visual Tracking: Review and Experimental Comparison

8 December 2020
Pengyu Zhang
Dong Wang
Huchuan Lu
arXiv:2012.04176
Abstract

Visual object tracking, a fundamental task in computer vision, has drawn much attention in recent years. To extend trackers to a wider range of applications, researchers have introduced information from multiple modalities to handle specific scenes, a promising research direction with emerging methods and benchmarks. To provide a thorough review of multi-modal tracking, we first summarize the multi-modal tracking algorithms, especially visible-depth (RGB-D) tracking and visible-thermal (RGB-T) tracking, in a unified taxonomy from different aspects. Second, we provide a detailed description of the related benchmarks and challenges. Furthermore, we conduct extensive experiments to analyze the effectiveness of trackers on five datasets: PTB, VOT19-RGBD, GTOT, RGBT234, and VOT19-RGBT. Finally, we discuss various future directions from different perspectives, including model design and dataset construction, for further research.
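For context, tracker effectiveness on benchmarks such as PTB, GTOT, and RGBT234 is conventionally reported with overlap-based success scores and center-error precision. The sketch below illustrates those two standard single-object tracking metrics; it is an assumption about the usual evaluation protocol rather than code from this survey, and the helper names (`iou`, `success_auc`, `precision_at`) are hypothetical.

```python
# Minimal sketch of the success/precision metrics commonly used on
# single-object tracking benchmarks. Illustrative only; not the paper's
# evaluation code. Boxes are axis-aligned (x, y, w, h) tuples.
import numpy as np

def iou(pred, gt):
    """Intersection-over-union between two (x, y, w, h) boxes."""
    px, py, pw, ph = pred
    gx, gy, gw, gh = gt
    ix = max(0.0, min(px + pw, gx + gw) - max(px, gx))
    iy = max(0.0, min(py + ph, gy + gh) - max(py, gy))
    inter = ix * iy
    union = pw * ph + gw * gh - inter
    return inter / union if union > 0 else 0.0

def success_auc(preds, gts, thresholds=np.linspace(0, 1, 21)):
    """Area under the success plot: fraction of frames whose overlap
    exceeds each threshold, averaged over the threshold sweep."""
    overlaps = np.array([iou(p, g) for p, g in zip(preds, gts)])
    return float(np.mean([(overlaps > t).mean() for t in thresholds]))

def precision_at(preds, gts, dist=20.0):
    """Precision: fraction of frames whose predicted box center lies
    within `dist` pixels of the ground-truth center (20 px is the
    conventional threshold)."""
    centers_p = np.array([[x + w / 2, y + h / 2] for x, y, w, h in preds])
    centers_g = np.array([[x + w / 2, y + h / 2] for x, y, w, h in gts])
    errs = np.linalg.norm(centers_p - centers_g, axis=1)
    return float((errs <= dist).mean())

if __name__ == "__main__":
    # Toy example: two frames of predicted vs. ground-truth boxes.
    preds = [(10, 10, 40, 40), (100, 100, 50, 50)]
    gts = [(12, 12, 40, 40), (130, 130, 50, 50)]
    print("success AUC:", success_auc(preds, gts))
    print("precision@20px:", precision_at(preds, gts))
```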
