
TAPNext: Tracking Any Point (TAP) as Next Token Prediction

Abstract

Tracking Any Point (TAP) in a video is a challenging computer vision problem with many demonstrated applications in robotics, video editing, and 3D reconstruction. Existing methods for TAP rely heavily on complex tracking-specific inductive biases and heuristics, limiting their generality and potential for scaling. To address these challenges, we present TAPNext, a new approach that casts TAP as sequential masked token decoding. Our model is causal, tracks in a purely online fashion, and removes tracking-specific inductive biases. This enables TAPNext to run with minimal latency and removes the temporal windowing required by many existing state-of-the-art trackers. Despite its simplicity, TAPNext achieves new state-of-the-art tracking performance among both online and offline trackers. Finally, we present evidence that many widely used tracking heuristics emerge naturally in TAPNext through end-to-end training. The TAPNext model and code can be found at this https URL.
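
The abstract does not include code, so the following is only a minimal, hypothetical sketch of the kind of online decoding loop it describes: each incoming frame is tokenized, combined with the current point tokens, and a causal model predicts the point's next location, one frame at a time, with no temporal window or look-ahead. All names here (quantize, CausalTracker, track_online) and the coordinate quantization scheme are illustrative assumptions, not the authors' API or architecture.

```python
# Hypothetical sketch of TAP-as-next-token-prediction (not the authors' code).
# A causal model consumes frame tokens one at a time and, for each query point,
# decodes its next (x, y) location as discrete tokens -- fully online, no windowing.
import numpy as np

NUM_BINS = 256  # assumed coordinate quantization per axis


def quantize(xy, h, w):
    """Map a continuous (x, y) position to discrete token ids (one per axis)."""
    x_tok = int(np.clip(xy[0] / w * NUM_BINS, 0, NUM_BINS - 1))
    y_tok = int(np.clip(xy[1] / h * NUM_BINS, 0, NUM_BINS - 1))
    return x_tok, y_tok


class CausalTracker:
    """Stand-in for a causal sequence model that carries recurrent state across frames."""

    def __init__(self):
        self.state = None  # state updated online as each frame arrives

    def step(self, frame_tokens, point_tokens):
        # A real model would update self.state and decode position logits here.
        # This placeholder just updates a dummy state and re-emits the previous tokens.
        self.state = float(frame_tokens.mean())
        return point_tokens


def track_online(frames, query_xy):
    """Track one query point through a video, one frame at a time."""
    h, w = frames[0].shape[:2]
    tracker = CausalTracker()
    point_tokens = quantize(query_xy, h, w)
    trajectory = []
    for frame in frames:  # purely online: no look-ahead, no temporal window
        frame_tokens = frame.astype(np.float32).reshape(-1)
        point_tokens = tracker.step(frame_tokens, point_tokens)
        trajectory.append(point_tokens)
    return trajectory


if __name__ == "__main__":
    video = [np.random.rand(64, 64, 3) for _ in range(8)]
    print(track_online(video, query_xy=(10.0, 20.0)))
```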

@article{zholus2025_2504.05579,
  title={TAPNext: Tracking Any Point (TAP) as Next Token Prediction},
  author={Artem Zholus and Carl Doersch and Yi Yang and Skanda Koppula and Viorica Patraucean and Xu Owen He and Ignacio Rocco and Mehdi S. M. Sajjadi and Sarath Chandar and Ross Goroshin},
  journal={arXiv preprint arXiv:2504.05579},
  year={2025}
}