arXiv:2510.21583

Sample By Step, Optimize By Chunk: Chunk-Level GRPO For Text-to-Image Generation

24 October 2025
Yifu Luo
Penghui Du
Bo Li
Sinan Du
Tiantian Zhang
Yongzhe Chang
Kai Wu
Kun Gai
Xueqian Wang
Links: arXiv (abs) · PDF · HTML · HuggingFace (30 upvotes) · GitHub (24,539★)
Length: Main: 11 pages · 11 figures · 6 tables · Bibliography: 4 pages · Appendix: 5 pages
Abstract

Group Relative Policy Optimization (GRPO) has shown strong potential for flow-matching-based text-to-image (T2I) generation, but it faces two key limitations: inaccurate advantage attribution and neglect of the temporal dynamics of generation. In this work, we argue that shifting the optimization paradigm from the step level to the chunk level can effectively alleviate these issues. Building on this idea, we propose Chunk-GRPO, the first chunk-level GRPO-based approach for T2I generation. The key insight is to group consecutive steps into coherent "chunks" that capture the intrinsic temporal dynamics of flow matching, and to optimize policies at the chunk level. In addition, we introduce an optional weighted sampling strategy to further enhance performance. Extensive experiments show that Chunk-GRPO achieves superior results in both preference alignment and image quality, highlighting the promise of chunk-level optimization for GRPO-based methods.
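The chunk-level objective described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, tensor shapes, uniform chunking scheme, and clipping constant are all assumptions. The core idea it demonstrates is summing per-step log-probabilities within each chunk so that the importance ratio, and hence the clipped GRPO surrogate, is defined per chunk rather than per denoising step, while the advantage stays group-relative (reward normalized across the group of sampled trajectories).

```python
import numpy as np

def chunk_grpo_loss(logp_new, logp_old, rewards, chunk_size=4, clip_eps=0.2):
    """Sketch of a chunk-level GRPO surrogate loss (illustrative only).

    logp_new, logp_old : (G, T) per-step log-probs for a group of G
        trajectories sampled with T denoising steps (old = sampling policy).
    rewards : (G,) scalar reward per generated image.
    """
    G, T = logp_new.shape
    # Group-relative advantage: normalize rewards within the sampled group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # (G,)

    # Group consecutive steps into chunks and sum their log-probs, so the
    # importance ratio is computed once per chunk instead of once per step.
    n_chunks = T // chunk_size
    lp_new = logp_new[:, : n_chunks * chunk_size].reshape(G, n_chunks, chunk_size).sum(-1)
    lp_old = logp_old[:, : n_chunks * chunk_size].reshape(G, n_chunks, chunk_size).sum(-1)

    ratio = np.exp(lp_new - lp_old)                         # (G, n_chunks)
    adv_c = np.broadcast_to(adv[:, None], ratio.shape)      # same advantage for all chunks
    unclipped = ratio * adv_c
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * adv_c
    # PPO-style pessimistic surrogate, averaged over group and chunks.
    return float(-np.minimum(unclipped, clipped).mean())
```

Note that when the new and old policies coincide, every ratio is 1 and the loss reduces to the negative mean of the normalized advantages, which is zero by construction; this is a quick sanity check for an implementation like this.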
