
Edit Transfer: Learning Image Editing via Vision In-Context Relations

Abstract

We introduce a new setting, Edit Transfer, where a model learns a transformation from just a single source-target example and applies it to a new query image. While text-based image editing (TIE) methods excel at semantic manipulations through textual prompts, they often struggle with precise geometric details (e.g., poses and viewpoint changes). Reference-based image editing (RIE), on the other hand, typically focuses on style or appearance and fails at non-rigid transformations. By explicitly learning the editing transformation from a source-target pair, Edit Transfer mitigates the limitations of both text-only and appearance-centric references. Drawing inspiration from in-context learning in large language models, we propose a visual relation in-context learning paradigm, building upon a DiT-based text-to-image model. We arrange the edited example and the query image into a unified four-panel composite, then apply lightweight LoRA fine-tuning to capture complex spatial transformations from minimal examples. Despite using only 42 training samples, Edit Transfer substantially outperforms state-of-the-art TIE and RIE methods on diverse non-rigid scenarios, demonstrating the effectiveness of few-shot visual relation learning.
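As a rough illustration of the four-panel composite described above (not the authors' code), the sketch below tiles the source-target example and the query image into a 2x2 grid, leaving the fourth panel for the model to fill in. The panel layout, image size, and helper name are assumptions made for illustration only.

```python
# Minimal sketch, assuming a 2x2 panel layout with the bottom-right slot left
# blank for the DiT model's output; details differ from the actual method.
from PIL import Image

def make_four_panel(example_src, example_tgt, query_src, panel_size=(512, 512)):
    """Tile [example source | example target] on the top row and
    [query source | blank output slot] on the bottom row."""
    w, h = panel_size
    src, tgt, qry = (img.resize(panel_size) for img in (example_src, example_tgt, query_src))
    canvas = Image.new("RGB", (2 * w, 2 * h), color=(255, 255, 255))
    canvas.paste(src, (0, 0))   # top-left: example source
    canvas.paste(tgt, (w, 0))   # top-right: example target (edited)
    canvas.paste(qry, (0, h))   # bottom-left: query source
    # bottom-right stays blank; LoRA fine-tuning teaches the model to render
    # the query image with the demonstrated edit applied there.
    return canvas
```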

@article{chen2025_2503.13327,
  title={Edit Transfer: Learning Image Editing via Vision In-Context Relations},
  author={Lan Chen and Qi Mao and Yuchao Gu and Mike Zheng Shou},
  journal={arXiv preprint arXiv:2503.13327},
  year={2025}
}