Interpretable Diffusion Models with B-cos Networks

Nicola Bernold
Moritz Vandenhirtz
Alice Bizeul
Julia E. Vogt
Main: 4 pages · 6 figures · 2 tables · Bibliography: 3 pages · Appendix: 5 pages
Abstract

Text-to-image diffusion models generate images by iteratively denoising random noise, conditioned on a prompt. While these models have enabled impressive progress in image generation, they often fail to accurately reflect all semantic information described in the prompt -- failures that are difficult to detect automatically. In this work, we introduce a diffusion model architecture built with B-cos modules that offers inherent interpretability. Our approach provides insight into how individual prompt tokens affect the generated image by producing explanations that highlight the pixel regions influenced by each token. We demonstrate that B-cos diffusion models can produce high-quality images while providing meaningful insights into prompt-image alignment.
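The core idea behind the B-cos modules mentioned above is to replace standard linear transformations with units whose response is explicitly scaled by the alignment (cosine similarity) between input and weight vectors, which is what makes the resulting attributions faithful. A minimal NumPy sketch of such a unit, assuming unit-normed weight rows and the commonly used exponent B = 2 (function name and parameters are illustrative, not the paper's implementation):

```python
import numpy as np

def bcos_linear(x, W, B=2.0, eps=1e-8):
    """Illustrative B-cos unit: a linear response down-weighted by
    |cos(x, w)|^(B-1), so only inputs aligned with a unit's weights
    contribute strongly to its output."""
    # Normalize each weight row to unit norm, as B-cos layers do.
    W_hat = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
    lin = W_hat @ x                          # standard linear response
    cos = lin / (np.linalg.norm(x) + eps)    # cosine similarity per unit
    return lin * np.abs(cos) ** (B - 1)      # suppress misaligned inputs
```

With B = 1 this reduces to an ordinary (weight-normalized) linear layer; larger B increasingly suppresses inputs that are not aligned with a unit's weights, which is what allows the network's input-dependent linear mapping to be read out as a per-token, per-pixel explanation.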
