Despite rapid advances in the capabilities of generative models, pretrained text-to-image models still struggle to capture the semantics conveyed by complex prompts that compound multiple objects and instance-level attributes. Consequently, there is growing interest in integrating additional structural constraints, typically in the form of coarse bounding boxes, to better guide the generation process in such challenging cases. In this work, we take the idea of structural guidance a step further with the observation that contemporary image generation models can directly provide a plausible fine-grained structural initialization. We propose a technique that couples this image-based structural guidance with LLM-based instance-level instructions, yielding output images that adhere to all parts of the text prompt, including object counts, instance-level attributes, and spatial relations between instances.
@article{sella2025_2505.05678,
  title={InstanceGen: Image Generation with Instance-level Instructions},
  author={Etai Sella and Yanir Kleiman and Hadar Averbuch-Elor},
  journal={arXiv preprint arXiv:2505.05678},
  year={2025}
}