
SpatialVID: A Large-Scale Video Dataset with Spatial Annotations

11 September 2025
Jiahao Wang
Yufeng Yuan
Rujie Zheng
Youtian Lin
Jian Gao
Lin Chen
Yajie Bao
Yi Zhang
Chang Zeng
Yanxi Zhou
Xiaoxiao Long
Hao Zhu
Z. Zhang
X. Cao
Yao Yao
Main: 8 pages, 22 figures, 4 tables; Bibliography: 4 pages; Appendix: 13 pages
Abstract

Significant progress has been made in spatial intelligence, spanning both spatial reconstruction and world exploration. However, the scalability and real-world fidelity of current models remain severely constrained by the scarcity of large-scale, high-quality training data. While several datasets provide camera pose information, they are typically limited in scale, diversity, and annotation richness, particularly for real-world dynamic scenes with ground-truth camera motion. To this end, we collect SpatialVID, a dataset consisting of a large corpus of in-the-wild videos with diverse scenes, camera movements, and dense 3D annotations such as per-frame camera poses, depth, and motion instructions. Specifically, we collect more than 21,000 hours of raw video and process them into 2.7 million clips through a hierarchical filtering pipeline, totaling 7,089 hours of dynamic content. A subsequent annotation pipeline enriches these clips with detailed spatial and semantic information, including camera poses, depth maps, dynamic masks, structured captions, and serialized motion instructions. Analysis of SpatialVID's data statistics reveals a richness and diversity that directly foster improved model generalization and performance, establishing it as a key asset for the video and 3D vision research community.
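
The abstract lists several per-clip annotation modalities (camera poses, depth maps, dynamic masks, a structured caption, and serialized motion instructions). As an illustration only, the Python sketch below models one annotated clip with assumed field names and array shapes; it is not the released SpatialVID schema or loader.

# Illustrative sketch only: field names and shapes are assumptions based on
# the abstract, not the actual SpatialVID data format.
from dataclasses import dataclass, field
from typing import List
import numpy as np


@dataclass
class ClipAnnotation:
    clip_id: str
    fps: float
    # Per-frame camera-to-world poses as 4x4 matrices, shape (T, 4, 4).
    camera_poses: np.ndarray
    # Per-frame depth maps, shape (T, H, W).
    depth_maps: np.ndarray
    # Per-frame boolean masks marking dynamic (moving) pixels, shape (T, H, W).
    dynamic_masks: np.ndarray
    # Structured caption describing the scene and camera behavior.
    caption: str = ""
    # Serialized motion instructions, e.g. ["dolly forward", "pan left"].
    motion_instructions: List[str] = field(default_factory=list)

    def camera_translation(self) -> np.ndarray:
        """Return per-frame camera centers, shape (T, 3)."""
        return self.camera_poses[:, :3, 3]


if __name__ == "__main__":
    T, H, W = 8, 4, 4  # tiny dummy clip for demonstration
    clip = ClipAnnotation(
        clip_id="demo_0001",
        fps=24.0,
        camera_poses=np.tile(np.eye(4), (T, 1, 1)),
        depth_maps=np.ones((T, H, W)),
        dynamic_masks=np.zeros((T, H, W), dtype=bool),
        caption="A static indoor scene with a slowly advancing camera.",
        motion_instructions=["dolly forward"],
    )
    print(clip.camera_translation().shape)  # (8, 3)

Keeping poses as 4x4 matrices makes it straightforward to derive quantities such as camera trajectories or relative motion between frames, which is the kind of ground-truth signal the dataset is meant to supply.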
