Scaling Zero-Shot Reference-to-Video Generation

Zijian Zhou
Shikun Liu
Haozhe Liu
Haonan Qiu
Zhaochong An
Weiming Ren
Zhiheng Liu
Xiaoke Huang
Kam Woh Ng
Tian Xie
Xiao Han
Yuren Cong
Hang Li
Chuyan Zhu
Aditya Patel
Tao Xiang
Sen He
Abstract

Reference-to-video (R2V) generation aims to synthesize videos that align with a text prompt while preserving the subject identity from reference images. However, current R2V methods are hindered by their reliance on explicit reference image-video-text triplets, whose construction is highly expensive and difficult to scale. We bypass this bottleneck by introducing Saber, a scalable zero-shot framework that requires no explicit R2V data. Trained exclusively on video-text pairs, Saber employs a masked training strategy and a tailored attention-based model design to learn identity-consistent and reference-aware representations. Mask augmentation techniques are further integrated to mitigate the copy-paste artifacts common in reference-to-video generation. Moreover, Saber generalizes well across varying numbers of references and achieves superior performance on the OpenS2V-Eval benchmark compared to methods trained with R2V data.
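The abstract only sketches the masked training and mask-augmentation ideas, so the snippet below is a minimal illustrative sketch of the general technique rather than the paper's actual pipeline: it builds a pseudo-reference from a single video frame by keeping only the subject region and randomly perturbing the subject mask, the kind of augmentation typically used to discourage copy-paste behavior. The function names (`augment_mask`, `make_pseudo_reference`), the grey background fill, and all parameter values are assumptions for illustration.

```python
import numpy as np

def augment_mask(mask: np.ndarray, rng: np.random.Generator,
                 max_shift: int = 8, dilate_iters: int = 2) -> np.ndarray:
    """Randomly shift and dilate a binary subject mask so the pseudo-reference
    no longer aligns pixel-perfectly with the target frame (assumed augmentation)."""
    h, w = mask.shape
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.zeros_like(mask)
    ys_dst = slice(max(dy, 0), h + min(dy, 0))
    xs_dst = slice(max(dx, 0), w + min(dx, 0))
    ys_src = slice(max(-dy, 0), h + min(-dy, 0))
    xs_src = slice(max(-dx, 0), w + min(-dx, 0))
    shifted[ys_dst, xs_dst] = mask[ys_src, xs_src]
    # Crude dilation with a 3x3 structuring element, repeated a few times.
    out = shifted.astype(bool)
    for _ in range(dilate_iters):
        padded = np.pad(out, 1)
        grown = np.zeros_like(out)
        for oy in (-1, 0, 1):
            for ox in (-1, 0, 1):
                grown |= padded[1 + oy:1 + oy + h, 1 + ox:1 + ox + w]
        out = grown
    return out.astype(mask.dtype)

def make_pseudo_reference(frame: np.ndarray, subject_mask: np.ndarray,
                          rng: np.random.Generator) -> np.ndarray:
    """Turn one video frame into a pseudo-reference image: keep the (augmented)
    subject region, replace the background with a neutral grey fill."""
    aug = augment_mask(subject_mask, rng).astype(bool)
    background = np.full_like(frame, 127)
    return np.where(aug[..., None], frame, background)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in frame
    mask = np.zeros((64, 64), dtype=np.uint8)
    mask[20:44, 20:44] = 1  # stand-in subject mask (e.g., from a segmenter)
    ref = make_pseudo_reference(frame, mask, rng)
    print(ref.shape, ref.dtype)
```

In such a setup, the pseudo-reference would be paired with the original video and its caption during training, so that no separately collected reference image-video-text triplets are needed.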
