VSC: Visual Search Compositional Text-to-Image Diffusion Model

2 May 2025
Do Huu Dat
Nam Hyeonu
Po-Yuan Mao
Tae-Hyun Oh
    DiffM
    CoGe
Abstract

Text-to-image diffusion models have shown impressive capabilities in generating realistic visuals from natural-language prompts, yet they often struggle to accurately bind attributes to their corresponding objects, especially in prompts containing multiple attribute-object pairs. This challenge primarily arises from the limitations of commonly used text encoders, such as CLIP, which can fail to encode complex linguistic relationships and modifiers effectively. Existing approaches attempt to mitigate these issues through attention-map control during inference or through layout information and fine-tuning during training, yet their performance drops as prompt complexity increases. In this work, we introduce a novel compositional generation method that leverages pairwise image embeddings to improve attribute-object binding. Our approach decomposes complex prompts into sub-prompts, generates corresponding images, and computes visual prototypes that are fused with text embeddings to enhance the representation. By applying segmentation-based localization training, we address cross-attention misalignment and achieve improved accuracy in binding multiple attributes to objects. Our approach outperforms existing compositional text-to-image diffusion models on the T2I-CompBench benchmark, achieving better human-evaluated image quality and greater robustness as the number of binding pairs in the prompt increases.
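
The method described in the abstract can be pictured as a small pipeline: split a compositional prompt into attribute-object sub-prompts, generate images for each sub-prompt, and pool their image features into visual prototypes that are later fused with the text embedding. The sketch below illustrates that flow in Python; it is not the authors' code. The prompt-splitting heuristic, the checkpoint names, averaging as the prototype, and the omitted fusion and localization-training steps are all assumptions for illustration.

# Minimal sketch of the prototype-building pipeline (assumptions noted above).
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPImageProcessor, CLIPVisionModelWithProjection

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# Placeholder checkpoints; the paper's actual backbone may differ.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=dtype
).to(device)
vision_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "openai/clip-vit-large-patch14"
).to(device)
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")

def split_prompt(prompt: str) -> list[str]:
    # Naive decomposition into attribute-object sub-prompts (assumed heuristic).
    return [p.strip() for p in prompt.split(" and ") if p.strip()]

@torch.no_grad()
def visual_prototype(sub_prompt: str, n_samples: int = 2) -> torch.Tensor:
    # Generate a few images for one attribute-object pair and average their
    # CLIP image embeddings into a single prototype vector.
    images = pipe(sub_prompt, num_inference_steps=25,
                  num_images_per_prompt=n_samples).images
    inputs = image_processor(images=images, return_tensors="pt").to(device)
    return vision_encoder(**inputs).image_embeds.mean(dim=0)

prompt = "a red book and a yellow vase"
prototypes = torch.stack([visual_prototype(p) for p in split_prompt(prompt)])
# In the paper, these prototypes are fused with the text embeddings before
# denoising; that fusion and the segmentation-based localization training
# are not shown here.
print(prototypes.shape)  # (num_sub_prompts, clip_projection_dim)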

View on arXiv
@article{dat2025_2505.01104,
  title={VSC: Visual Search Compositional Text-to-Image Diffusion Model},
  author={Do Huu Dat and Nam Hyeonu and Po-Yuan Mao and Tae-Hyun Oh},
  journal={arXiv preprint arXiv:2505.01104},
  year={2025}
}