ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory

18 November 2024
Cheng-Yen Yang, Hsiang-Wei Huang, Wenhao Chai, Zhongyu Jiang, Jenq-Neng Hwang
    VLM

Papers citing "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory"

3 / 3 papers shown
MoSAM: Motion-Guided Segment Anything Model with Spatial-Temporal Memory Selection
Q. Yang, Yuan Yao, Miaomiao Cui, Liefeng Bo
VLM
30 Apr 2025
SAM2MOT: A Novel Paradigm of Multi-Object Tracking by Segmentation
Junjie Jiang, Zelin Wang, Manqi Zhao, Yin Li, Dongsheng Jiang
06 Apr 2025
MemorySAM: Memorize Modalities and Semantics with Segment Anything Model 2 for Multi-modal Semantic Segmentation
Chenfei Liao, Xu Zheng, Yuanhuiyi Lyu, Haiwei Xue, Yihong Cao, Jiawen Wang, Kailun Yang, Xuming Hu
VLM
09 Mar 2025