Towards Understanding Camera Motions in Any Video

We introduce CameraBench, a large-scale dataset and benchmark designed to assess and improve camera motion understanding. CameraBench consists of ~3,000 diverse internet videos, annotated by experts through a rigorous multi-stage quality control process. One of our contributions is a taxonomy of camera motion primitives, designed in collaboration with cinematographers. We find, for example, that some motions like "follow" (or tracking) require understanding scene content like moving subjects. We conduct a large-scale human study to quantify human annotation performance, revealing that domain expertise and tutorial-based training can significantly enhance accuracy. For example, a novice may confuse zoom-in (a change of intrinsics) with translating forward (a change of extrinsics), but can be trained to differentiate the two. Using CameraBench, we evaluate Structure-from-Motion (SfM) and Video-Language Models (VLMs), finding that SfM models struggle to capture semantic primitives that depend on scene content, while VLMs struggle to capture geometric primitives that require precise estimation of trajectories. We then fine-tune a generative VLM on CameraBench to achieve the best of both worlds and showcase its applications, including motion-augmented captioning, video question answering, and video-text retrieval. We hope our taxonomy, benchmark, and tutorials will drive future efforts towards the ultimate goal of understanding camera motions in any video.
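To make the zoom-in versus translate-forward distinction concrete, here is a minimal pinhole-camera sketch (not from the paper; the `project` helper, focal lengths, and point coordinates are illustrative assumptions). Zooming in changes only the intrinsics (focal length), so all image points scale uniformly about the principal point; translating the camera forward changes only the extrinsics, so nearby points move more than distant ones (parallax).

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D world points X (N,3) to pixels with intrinsics K and extrinsics [R|t]."""
    Xc = (R @ X.T + t.reshape(3, 1)).T   # world -> camera coordinates
    uv = (K @ Xc.T).T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]        # perspective divide

# Two points at different depths, off to the side of the optical axis.
X = np.array([[0.5, 0.0, 2.0],
              [0.5, 0.0, 8.0]])
K = np.array([[500.0, 0.0, 320.0],       # focal length 500 px, principal point (320, 240)
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)

# Zoom-in: only the focal length (intrinsics) changes -> every point's offset
# from the principal point scales by the same factor, regardless of depth.
K_zoom = K.copy()
K_zoom[0, 0] = K_zoom[1, 1] = 1000.0

# Translate forward (dolly-in): only the extrinsic translation changes ->
# the near point shifts far more than the distant one (parallax).
t_fwd = np.array([0.0, 0.0, -1.0])       # camera center moves 1 unit along +Z

print("original :", project(K, R, t, X))
print("zoom-in  :", project(K_zoom, R, t, X))
print("dolly-in :", project(K, R, t_fwd, X))
```

Running this shows the zoomed projections simply doubling their distance from the principal point, whereas the dolly moves the near point far more than the distant one, which is the visual cue the tutorial trains annotators to spot.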
@article{lin2025_2504.15376,
  title   = {Towards Understanding Camera Motions in Any Video},
  author  = {Zhiqiu Lin and Siyuan Cen and Daniel Jiang and Jay Karhade and Hewei Wang and Chancharik Mitra and Tiffany Ling and Yuhan Huang and Sifan Liu and Mingyu Chen and Rushikesh Zawar and Xue Bai and Yilun Du and Chuang Gan and Deva Ramanan},
  journal = {arXiv preprint arXiv:2504.15376},
  year    = {2025}
}