EgoPoints: Advancing Point Tracking for Egocentric Videos

Abstract

We introduce EgoPoints, a benchmark for point tracking in egocentric videos. We annotate 4.7K challenging tracks in egocentric sequences. Compared to the popular TAP-Vid-DAVIS evaluation benchmark, we include 9x more points that go out-of-view and 59x more points that require re-identification (ReID) after returning to view. To measure the performance of models on these challenging points, we introduce evaluation metrics that specifically monitor tracking performance on points in-view, points out-of-view, and points that require re-identification. We then propose a pipeline to create semi-real sequences with automatic ground truth. We generate 11K such sequences by combining dynamic Kubric objects with scene points from EPIC Fields. When fine-tuning point tracking methods on these sequences and evaluating on our annotated EgoPoints sequences, we improve CoTracker across all metrics, including the tracking accuracy $\delta^\star_{\text{avg}}$ by 2.7 percentage points and accuracy on ReID sequences (ReID$\delta_{\text{avg}}$) by 2.4 points. We also improve the $\delta^\star_{\text{avg}}$ and ReID$\delta_{\text{avg}}$ of PIPs++ by 0.3 and 2.8 points respectively.
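The tracking-accuracy metric referenced above follows the TAP-Vid convention: the fraction of visible points predicted within a pixel threshold of the ground truth, averaged over a fixed set of thresholds. A minimal sketch of that computation (the function name, array shapes, and threshold set are illustrative assumptions, not the paper's released evaluation code):

```python
import numpy as np

def delta_avg(pred, gt, visible, thresholds=(1, 2, 4, 8, 16)):
    """TAP-Vid-style delta_avg: for each threshold, the fraction of
    visible ground-truth points whose predicted position lies within
    that many pixels of the ground truth; averaged over thresholds.

    pred, gt : (N, T, 2) arrays of (x, y) point positions
    visible  : (N, T) boolean mask of ground-truth visibility
    """
    # Per-point, per-frame pixel error between prediction and ground truth.
    dists = np.linalg.norm(pred - gt, axis=-1)  # (N, T)
    fracs = []
    for thr in thresholds:
        within = (dists < thr) & visible
        fracs.append(within.sum() / visible.sum())
    return float(np.mean(fracs))

# Toy check: perfect predictions score 1.0.
gt = np.zeros((3, 5, 2))
vis = np.ones((3, 5), dtype=bool)
print(delta_avg(gt, gt, vis))  # → 1.0
```

The ReID variant reported in the abstract would restrict this average to points evaluated after they return to view, so that re-identification failures are not masked by easy in-view frames.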
