What Secrets Do Your Manifolds Hold? Understanding the Local Geometry of Generative Models

International Conference on Learning Representations (ICLR), 2024
Katherine Heller
Golnoosh Farnadi
Negar Rostamzadeh
Mohammad Havaei
Main: 10 pages · Figures: 32 · Bibliography: 4 pages · Appendix: 12 pages
Abstract

Deep generative models are frequently used to learn continuous representations of complex data distributions from a finite number of samples. For any generative model, including pre-trained foundation models with diffusion or transformer architectures, generation performance can vary significantly across the learned data manifold. In this paper we study the local geometry of the learned manifold and its relationship to generation outcomes for a wide range of generative models, including DDPM, Diffusion Transformer (DiT), and Stable Diffusion 1.4. Building on the theory of continuous piecewise-linear (CPWL) generators, we characterize the local geometry in terms of three geometric descriptors: scaling (ψ), rank (ν), and complexity/un-smoothness (δ). We provide quantitative and qualitative evidence that, for a given latent-image pair, these local descriptors are indicative of generation aesthetics, diversity, and memorization by the generative model. Finally, we demonstrate that by training a reward model on the local scaling for Stable Diffusion, we can self-improve both generation aesthetics and diversity using 'geometry reward'-based guidance during denoising.
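To make the descriptors concrete, the sketch below illustrates how local scaling and rank can be read off the Jacobian of a CPWL generator: for a ReLU network, the map is exactly linear in a neighborhood of almost every latent point, so a finite-difference Jacobian recovers the local linear map, its log singular values sum to the local scaling ψ, and its numerical rank gives ν. The toy generator, its weights, and the function names here are illustrative assumptions, not the paper's implementation; the complexity descriptor δ (which depends on the density of linear-region boundaries) is omitted for brevity.

```python
import numpy as np

def toy_generator(z):
    # Toy CPWL generator: a small fixed-weight ReLU network R^2 -> R^3.
    # Weights are arbitrary and purely illustrative (an assumption, not
    # any model from the paper).
    W1 = np.array([[1.0, -0.5], [0.3, 1.2], [-0.7, 0.4], [0.9, 0.1]])
    W2 = np.array([[0.5, -1.0, 0.2, 0.8],
                   [1.1, 0.3, -0.6, 0.4],
                   [-0.2, 0.7, 0.9, -0.5]])
    return W2 @ np.maximum(W1 @ z, 0.0)

def local_descriptors(g, z, eps=1e-5, tol=1e-8):
    """Estimate two of the local geometric descriptors at latent z:
    psi (local scaling): sum of log nonzero singular values of the
        Jacobian, i.e. the log volume change of the local linear map;
    nu (local rank): number of singular values above tol."""
    z = np.asarray(z, dtype=float)
    # Central finite differences; exact for a CPWL map whenever z is
    # not within eps of a linear-region boundary.
    J = np.stack([(g(z + eps * e) - g(z - eps * e)) / (2 * eps)
                  for e in np.eye(z.size)], axis=1)
    s = np.linalg.svd(J, compute_uv=False)
    nz = s[s > tol]
    return float(np.sum(np.log(nz))), int(nz.size)

psi, nu = local_descriptors(toy_generator, np.array([0.5, -0.3]))
print(psi, nu)
```

A large ψ indicates the generator locally expands latent volume (associated in the paper with higher diversity), while a drop in ν signals a locally degenerate, lower-dimensional output manifold.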
