CPO: Condition Preference Optimization for Controllable Image Generation

6 November 2025
Zonglin Lyu, Ming Li, Xinxin Liu, Chen Chen
arXiv: 2511.04753
Main: 10 pages · 20 figures · 10 tables · Bibliography: 3 pages · Appendix: 21 pages
Abstract

To enhance controllability in text-to-image generation, ControlNet introduces image-based control signals, while ControlNet++ improves pixel-level cycle consistency between generated images and the input control signal. To avoid the prohibitive cost of back-propagating through the sampling process, ControlNet++ optimizes only low-noise timesteps (e.g., t < 200) using a single-step approximation, which not only ignores the contribution of high-noise timesteps but also introduces additional approximation errors. A straightforward alternative for optimizing controllability across all timesteps is Direct Preference Optimization (DPO), a fine-tuning method that increases model preference for more controllable images (I^w) over less controllable ones (I^l). However, due to uncertainty in generative models, it is difficult to ensure that win-lose image pairs differ only in controllability while keeping other factors, such as image quality, fixed. To address this, we propose performing preference learning over control conditions rather than generated images. Specifically, we construct winning and losing control signals, c^w and c^l, and train the model to prefer c^w. This method, which we term Condition Preference Optimization (CPO), eliminates confounding factors and yields a low-variance training objective. Our approach theoretically exhibits lower contrastive loss variance than DPO and empirically achieves superior results. Moreover, CPO requires less computation and storage for dataset curation. Extensive experiments show that CPO significantly improves controllability over the state-of-the-art ControlNet++ across multiple control types: over 10% error rate reduction in segmentation, 70–80% in human pose, and consistent 2–5% reductions in edge and depth maps.
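The abstract describes CPO as a DPO-style preference objective applied over control conditions c^w and c^l rather than over image pairs. As a rough illustration only, the sketch below shows what such an objective could look like in PyTorch, assuming (as is common in diffusion preference tuning, not stated in this abstract) that the log-likelihood log p(I | c) is approximated by the negative denoising error; the names denoise_err, cpo_loss, and beta are hypothetical and not taken from the paper.

import torch
import torch.nn.functional as F

def denoise_err(eps_pred: torch.Tensor, eps_true: torch.Tensor) -> torch.Tensor:
    # Per-sample denoising MSE; lower error serves as a proxy for
    # higher log-likelihood of the image under the given condition.
    return (eps_pred - eps_true).pow(2).flatten(1).mean(dim=1)

def cpo_loss(eps_w, eps_l, eps_w_ref, eps_l_ref, eps_true, beta=0.1):
    # Log-likelihood proxies under the winning (c^w) and losing (c^l)
    # control signals, for the trained model and a frozen reference.
    logp_w = -denoise_err(eps_w, eps_true)
    logp_l = -denoise_err(eps_l, eps_true)
    logp_w_ref = -denoise_err(eps_w_ref, eps_true)
    logp_l_ref = -denoise_err(eps_l_ref, eps_true)
    # Implicit reward margin: push the model to prefer the winning
    # condition c^w over c^l for the same target image.
    margin = (logp_w - logp_w_ref) - (logp_l - logp_l_ref)
    return -F.logsigmoid(beta * margin).mean()

Because both terms of the pair share the same target image and differ only in the control signal, a contrast of this form avoids the confounders (e.g., image quality) that arise when comparing two different generated images, which is the low-variance property the abstract claims.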
