Towards Generalized Multi-Image Editing for Unified Multimodal Models

Pengcheng Xu
Peng Tang
Donghao Luo
Xiaobin Hu
Weichu Cui
Qingdong He
Zhennan Chen
Jiangning Zhang
Charles Ling
Boyu Wang
Main: 8 pages · 15 figures · 7 tables · Bibliography: 4 pages · Appendix: 9 pages
Abstract

Unified Multimodal Models (UMMs) integrate multimodal understanding and generation, yet they struggle to maintain visual consistency and to disambiguate visual cues when referencing details across multiple input images. In this work, we propose a scalable multi-image editing framework for UMMs that explicitly distinguishes image identities and generalizes to a variable number of inputs. Algorithmically, we introduce two innovations: 1) Learnable latent separators explicitly differentiate each reference image in the latent space, enabling accurate and disentangled conditioning. 2) Sinusoidal index encoding assigns all visual tokens from the same image a shared continuous sinusoidal index embedding, which provides explicit image identity while allowing generalization and extrapolation to a variable number of inputs. To facilitate training and evaluation, we establish a high-fidelity benchmark using an inverse dataset construction methodology that guarantees artifact-free, achievable target outputs. Experiments show clear improvements in semantic consistency, visual fidelity, and cross-image integration over prior baselines on diverse multi-image editing tasks, validating the advantages of our approach in consistency and generalization.
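The abstract names two concrete mechanisms: learnable latent separators between reference images and a per-image sinusoidal index embedding shared by all of that image's tokens. The sketch below illustrates one plausible reading of these ideas in PyTorch; the class and function names (`MultiImageConditioner`, `sinusoidal_index_embedding`) and all dimensions are assumptions for illustration, not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

# Hypothetical sketch: tag every visual token of the k-th reference image with a
# continuous sinusoidal embedding of k, and insert a learnable separator token
# between images so each reference is explicitly identified in the latent sequence.

def sinusoidal_index_embedding(index: torch.Tensor, d_model: int) -> torch.Tensor:
    """Sinusoidal embedding of an image index.

    index: shape (num_images,); returns shape (num_images, d_model).
    Because the encoding is continuous in `index`, it can extrapolate to more
    reference images than were seen during training.
    """
    half = d_model // 2
    freqs = torch.exp(
        -math.log(10000.0)
        * torch.arange(half, dtype=torch.float32, device=index.device)
        / half
    )                                                   # (half,)
    angles = index.float().unsqueeze(-1) * freqs        # (num_images, half)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)


class MultiImageConditioner(nn.Module):
    """Builds one conditioning sequence from a variable number of reference images."""

    def __init__(self, d_model: int):
        super().__init__()
        # One learnable separator token inserted between consecutive images.
        self.separator = nn.Parameter(torch.randn(1, 1, d_model) * 0.02)
        self.d_model = d_model

    def forward(self, image_tokens: list[torch.Tensor]) -> torch.Tensor:
        """image_tokens: list of (batch, n_tokens_i, d_model) tensors, one per image."""
        batch = image_tokens[0].shape[0]
        indices = torch.arange(len(image_tokens), device=image_tokens[0].device)
        idx_emb = sinusoidal_index_embedding(indices, self.d_model)  # (num_images, d)

        pieces = []
        for k, tokens in enumerate(image_tokens):
            # Every token from image k receives the same index embedding.
            pieces.append(tokens + idx_emb[k].view(1, 1, -1))
            if k < len(image_tokens) - 1:
                pieces.append(self.separator.expand(batch, -1, -1))
        return torch.cat(pieces, dim=1)  # single conditioning sequence


# Usage: two reference images with different token counts.
cond = MultiImageConditioner(d_model=64)
imgs = [torch.randn(2, 16, 64), torch.randn(2, 25, 64)]
sequence = cond(imgs)  # shape (2, 16 + 1 + 25, 64)
```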
