GroundingBooth: Grounding Text-to-Image Customization

Recent approaches in text-to-image customization have primarily focused on preserving the identity of the input subject, but often fail to control the spatial location and size of objects. We introduce GroundingBooth, which achieves zero-shot, instance-level spatial grounding on both foreground subjects and background objects in the text-to-image customization task. Our proposed grounding module and subject-grounded cross-attention layer enable the creation of personalized images with accurate layout alignment, identity preservation, and strong text-image coherence. In addition, our model seamlessly supports personalization with multiple subjects. Our model shows strong results in both layout-guided image synthesis and text-to-image customization tasks. The project page is available at this https URL.
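The abstract describes a subject-grounded cross-attention layer that ties each subject's tokens to a target bounding box. The sketch below is not the authors' implementation; the module name, the box-rasterization helper, and the masking scheme are assumptions made purely for illustration. It shows the common grounding idea the abstract points to: image queries at a spatial position may only attend to a subject's tokens if that position falls inside the subject's box.

```python
# Minimal illustrative sketch (assumed, not the paper's code): cross-attention from
# image queries to per-subject tokens, masked by each subject's bounding box.
import torch
import torch.nn as nn
import torch.nn.functional as F


def box_mask(boxes: torch.Tensor, h: int, w: int) -> torch.Tensor:
    """Rasterize normalized boxes (N, 4) in (x0, y0, x1, y1) format into (N, h*w) masks."""
    ys = (torch.arange(h, device=boxes.device, dtype=torch.float32) + 0.5) / h
    xs = (torch.arange(w, device=boxes.device, dtype=torch.float32) + 0.5) / w
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")            # (h, w) pixel-center grid
    x0, y0, x1, y1 = boxes.unbind(dim=-1)                     # each (N,)
    inside = (
        (xx[None] >= x0[:, None, None]) & (xx[None] < x1[:, None, None]) &
        (yy[None] >= y0[:, None, None]) & (yy[None] < y1[:, None, None])
    )
    return inside.reshape(boxes.shape[0], h * w)


class SubjectGroundedCrossAttention(nn.Module):
    """Image queries attend to subject tokens only inside each subject's box (assumed design)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, img_feats, subject_tokens, boxes, h, w):
        # img_feats: (B, h*w, dim); subject_tokens: (B, N, T, dim); boxes: (B, N, 4)
        b, n, t, d = subject_tokens.shape
        q = self.to_q(img_feats)                               # (B, h*w, dim)
        k = self.to_k(subject_tokens.reshape(b, n * t, d))     # (B, N*T, dim)
        v = self.to_v(subject_tokens.reshape(b, n * t, d))

        # Grounding mask: query position p may attend to subject i's tokens
        # only if p lies inside subject i's box.
        masks = torch.stack([box_mask(boxes[i], h, w) for i in range(b)])  # (B, N, h*w)
        attn_mask = masks.permute(0, 2, 1)                     # (B, h*w, N)
        attn_mask = attn_mask.unsqueeze(-1).expand(b, h * w, n, t).reshape(b, h * w, n * t)

        # Queries outside every box have no valid keys; let them attend uniformly
        # instead of producing NaNs (a simplification made for this sketch).
        no_key = ~attn_mask.any(dim=-1, keepdim=True)
        attn_mask = attn_mask | no_key

        def split(x):  # (B, L, dim) -> (B, heads, L, dim/heads)
            return x.reshape(b, x.shape[1], self.num_heads, -1).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        out = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask.unsqueeze(1))
        out = out.transpose(1, 2).reshape(b, h * w, -1)
        return self.to_out(out)


# Example usage with two subjects on a 16x16 latent grid (all values hypothetical).
if __name__ == "__main__":
    attn = SubjectGroundedCrossAttention(dim=64)
    img = torch.randn(1, 16 * 16, 64)
    subj = torch.randn(1, 2, 4, 64)                            # 2 subjects, 4 tokens each
    boxes = torch.tensor([[[0.0, 0.0, 0.5, 0.5], [0.5, 0.5, 1.0, 1.0]]])
    print(attn(img, subj, boxes, 16, 16).shape)                # torch.Size([1, 256, 64])
```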