OmniCam: Unified Multimodal Video Generation via Camera Control

Camera control, which achieves diverse visual effects by changing camera position and pose, has attracted widespread attention. However, existing methods face challenges such as complex interaction and limited control capabilities. To address these issues, we present OmniCam, a unified multimodal camera control framework. Leveraging large language models and video diffusion models, OmniCam generates spatio-temporally consistent videos. It supports various combinations of input modalities: the user can provide text or a video with the expected trajectory as camera path guidance, and an image or a video as a content reference, enabling precise control over camera motion. To facilitate the training of OmniCam, we introduce the OmniTr dataset, which contains a large collection of high-quality long-sequence trajectories, videos, and corresponding descriptions. Experimental results demonstrate that our model achieves state-of-the-art performance in high-quality camera-controlled video generation across various metrics.
@article{yang2025_2504.02312,
  title={OmniCam: Unified Multimodal Video Generation via Camera Control},
  author={Xiaoda Yang and Jiayang Xu and Kaixuan Luan and Xinyu Zhan and Hongshun Qiu and Shijun Shi and Hao Li and Shuai Yang and Li Zhang and Checheng Yu and Cewu Lu and Lixin Yang},
  journal={arXiv preprint arXiv:2504.02312},
  year={2025}
}