arXiv:2412.01027

Unleashing In-context Learning of Autoregressive Models for Few-shot Image Manipulation

2 December 2024
Bolin Lai
F. Xu
Miao Liu
Xiaoliang Dai
Nikhil Mehta
Chenguang Zhu
Zeyi Huang
James M. Rehg
Sangmin Lee
Ning Zhang
Tong Xiao
Abstract

Text-guided image manipulation has advanced notably in recent years. To mitigate linguistic ambiguity, few-shot learning with visual examples has been applied for instructions that are underrepresented in the training set or difficult to describe purely in language. However, learning from visual prompts requires strong reasoning capability, which diffusion models struggle with. To address this issue, we introduce a novel multi-modal autoregressive model, dubbed InstaManip, that can instantly learn a new image manipulation operation from textual and visual guidance via in-context learning, and apply it to new query images. Specifically, we propose an innovative group self-attention mechanism that breaks the in-context learning process into two separate stages, learning and applying, which simplifies the complex problem into two easier tasks. We also introduce a relation regularization method to further disentangle image transformation features from irrelevant content in exemplar images. Extensive experiments suggest that our method surpasses previous few-shot image manipulation models by a notable margin (≥19% in human evaluation). We also find that our model can be further boosted by increasing the number or diversity of exemplar images.
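
The abstract only names the group self-attention mechanism, so the sketch below is one plausible reading of it rather than the paper's actual implementation. It assumes three hypothetical token groups (exemplar tokens, learnable manipulation-state tokens, and query tokens) and builds a boolean attention mask: in the learning stage, exemplar and state tokens attend to each other so the state tokens can distill the demonstrated transformation; in the applying stage, query tokens attend only to the state tokens and themselves, so raw exemplar content never reaches the query directly. All names and group sizes here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def group_attention_mask(n_exemplar: int, n_state: int, n_query: int) -> torch.Tensor:
    """Boolean mask (True = may attend) splitting one self-attention pass
    into two groups, as hypothesized from the abstract:
      learning: exemplar tokens <-> state tokens, distilling the manipulation;
      applying: query tokens -> state tokens (and themselves) only.
    """
    n = n_exemplar + n_state + n_query
    ex = slice(0, n_exemplar)
    st = slice(n_exemplar, n_exemplar + n_state)
    qr = slice(n_exemplar + n_state, n)

    mask = torch.zeros(n, n, dtype=torch.bool)
    # Learning group: exemplar and state tokens attend to each other.
    mask[ex, ex] = True
    mask[ex, st] = True
    mask[st, ex] = True
    mask[st, st] = True
    # Applying group: queries read the distilled state tokens and themselves,
    # never the raw exemplar tokens.
    mask[qr, st] = True
    mask[qr, qr] = True
    return mask

# Toy usage: 4 exemplar, 2 state, and 3 query tokens with embedding dim 16.
x = torch.randn(1, 4 + 2 + 3, 16)
mask = group_attention_mask(4, 2, 3)
out = F.scaled_dot_product_attention(x, x, x, attn_mask=mask)
```

If the mechanism works roughly this way, a single masked self-attention pass realizes both stages at once, which would match the abstract's claim of decomposing in-context learning into separate learning and applying steps, and the query-side masking would contribute to disentangling the transformation from irrelevant exemplar content.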
