HCMA: Hierarchical Cross-model Alignment for Grounded Text-to-Image Generation

Abstract

Text-to-image synthesis has progressed to the point where models can generate visually compelling images from natural language prompts. Yet existing methods often fail to reconcile high-level semantic fidelity with explicit spatial control, particularly in scenes involving multiple objects, nuanced relations, or complex layouts. To bridge this gap, we propose a Hierarchical Cross-Modal Alignment (HCMA) framework for grounded text-to-image generation. HCMA integrates two alignment modules into each diffusion sampling step: a global module that continuously aligns latent representations with textual descriptions to ensure scene-level coherence, and a local module that employs bounding-box layouts to anchor objects at specified locations, enabling fine-grained spatial control. Extensive experiments on the MS-COCO 2014 validation set show that HCMA surpasses state-of-the-art baselines, achieving a 0.69 improvement in Fréchet Inception Distance (FID) and a 0.0295 gain in CLIP Score. These results demonstrate HCMA's effectiveness in faithfully capturing intricate textual semantics while adhering to user-defined spatial constraints, offering a robust solution for semantically grounded image synthesis. Our code is available at this https URL.
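To make the per-step alignment idea concrete, below is a minimal, self-contained sketch of one gradient-guided update in the spirit of the abstract's description. It is an illustration under assumptions, not the authors' implementation: the pooling functions, the cosine-similarity objectives, and the weights lambda_g and lambda_l are stand-ins for the paper's global and local alignment modules, and the tensors are toy placeholders for diffusion latents and text/object embeddings.

import torch
import torch.nn.functional as F

# Toy stand-ins; in HCMA these would come from the diffusion U-Net's
# latent space and CLIP-style text/region encoders (names illustrative).
def pool_latent(latents):
    # Global feature: average the latent over all spatial positions.
    return latents.mean(dim=(2, 3))

def pool_region(latents, box):
    # Local feature: average the latent inside one bounding box.
    x0, y0, x1, y1 = box
    return latents[:, :, y0:y1, x0:x1].mean(dim=(2, 3))

def alignment_step(latents, text_emb, boxes, obj_embs,
                   lambda_g=0.1, lambda_l=0.1):
    """One guidance update: pull the whole latent toward the caption
    embedding (scene-level coherence) and each box region toward its
    object embedding (spatial grounding), then descend the gradient."""
    latents = latents.detach().requires_grad_(True)
    g_loss = 1 - F.cosine_similarity(pool_latent(latents), text_emb).mean()
    l_loss = sum(1 - F.cosine_similarity(pool_region(latents, b), e).mean()
                 for b, e in zip(boxes, obj_embs))
    loss = lambda_g * g_loss + lambda_l * l_loss
    loss.backward()
    return (latents - latents.grad).detach()

# Usage with dummy tensors: a 4-channel 64x64 latent, 4-dim embeddings,
# and one bounding box given in latent-grid coordinates.
latents = torch.randn(1, 4, 64, 64)
text_emb = torch.randn(1, 4)
boxes = [(8, 8, 32, 32)]
obj_embs = [torch.randn(1, 4)]
latents = alignment_step(latents, text_emb, boxes, obj_embs)

In a full pipeline, an update of this kind would be interleaved with every denoising step of the sampler, so the global and local constraints are enforced throughout sampling rather than only at the end.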

@article{wang2025_2505.06512,
  title={HCMA: Hierarchical Cross-model Alignment for Grounded Text-to-Image Generation},
  author={Hang Wang and Zhi-Qi Cheng and Chenhao Lin and Chao Shen and Lei Zhang},
  journal={arXiv preprint arXiv:2505.06512},
  year={2025}
}