
Beyond Static Scenes: Camera-controllable Background Generation for Human Motion

Abstract

In this paper, we investigate the generation of new video backgrounds given a human foreground video, a camera pose, and a reference scene image. This task presents three key challenges. First, the generated background should precisely follow the camera movements corresponding to the human foreground. Second, as the camera shifts in different directions, newly revealed content should appear seamless and natural. Third, objects within the video frame should maintain consistent textures as the camera moves to ensure visual coherence. To address these challenges, we propose DynaScene, a new framework that uses camera poses extracted from the original video as an explicit control to drive background motion. Specifically, we design a multi-task learning paradigm that incorporates auxiliary tasks, namely background outpainting and scene variation, to enhance the realism of the generated backgrounds. Given the scarcity of suitable data, we construct a large-scale, high-quality dataset tailored for this task, comprising video foregrounds, reference scene images, and corresponding camera poses. The dataset contains 200K video clips, ten times larger than existing real-world human video datasets, providing a significantly richer and more diverse training resource. Project page: this https URL
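The abstract only describes the task interface (human foreground video, reference scene image, and per-frame camera poses in; background video out) and a multi-task objective with auxiliary outpainting and scene-variation losses. The sketch below is a minimal, hypothetical illustration of that interface in PyTorch; the module name DynaSceneGenerator, the encoder/decoder layers, the 3x4 pose encoding, and the loss weight w_aux are all assumptions, not the paper's actual architecture or training recipe.

```python
import torch
import torch.nn as nn

# Hypothetical sketch only: layer choices, dimensions, and loss weights are
# illustrative assumptions; the abstract does not specify the real design.

class DynaSceneGenerator(nn.Module):
    """Toy stand-in for a camera-conditioned background video generator."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Encode the human-foreground frames, the reference scene image, and the
        # per-frame camera poses, then fuse them into background frames.
        self.fg_encoder = nn.Conv3d(3, feat_dim, kernel_size=3, padding=1)
        self.scene_encoder = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        self.pose_encoder = nn.Linear(12, feat_dim)  # flattened 3x4 extrinsics per frame
        self.decoder = nn.Conv3d(feat_dim, 3, kernel_size=3, padding=1)

    def forward(self, fg_video, scene_image, camera_poses):
        # fg_video:     (B, 3, T, H, W) human foreground frames
        # scene_image:  (B, 3, H, W)    reference scene
        # camera_poses: (B, T, 12)      flattened camera extrinsics per frame
        f = self.fg_encoder(fg_video)
        s = self.scene_encoder(scene_image).unsqueeze(2)       # broadcast over time
        p = self.pose_encoder(camera_poses).permute(0, 2, 1)   # (B, C, T)
        p = p[..., None, None]                                  # broadcast over space
        return self.decoder(f + s + p)                          # (B, 3, T, H, W) background


def multitask_loss(pred_main, tgt_main, pred_outpaint, tgt_outpaint,
                   pred_variation, tgt_variation, w_aux=0.5):
    # Main background-generation loss plus the two auxiliary tasks named in the
    # abstract (background outpainting, scene variation); w_aux is an assumed weight.
    l1 = nn.functional.l1_loss
    return (l1(pred_main, tgt_main)
            + w_aux * l1(pred_outpaint, tgt_outpaint)
            + w_aux * l1(pred_variation, tgt_variation))
```

Under these assumptions, a training sample from the described dataset would pair a foreground clip, a reference scene image, and its camera-pose sequence with the ground-truth background clip used as the main reconstruction target.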

@article{yao2025_2504.02004,
  title={Beyond Static Scenes: Camera-controllable Background Generation for Human Motion},
  author={Mingshuai Yao and Mengting Chen and Qinye Zhou and Yabo Zhang and Ming Liu and Xiaoming Li and Shaohui Liu and Chen Ju and Shuai Xiao and Qingwen Liu and Jinsong Lan and Wangmeng Zuo},
  journal={arXiv preprint arXiv:2504.02004},
  year={2025}
}