ToMA: Token Merge with Attention for Diffusion Models

13 September 2025
Wenbo Lu, Shaoyi Zheng, Yuxuan Xia, Shengjie Wang
Main: 9 pages · 9 figures · 12 tables · Bibliography: 2 pages · Appendix: 11 pages
Abstract

Diffusion models excel in high-fidelity image generation but face scalability limits due to transformers' quadratic attention complexity. Plug-and-play token reduction methods like ToMeSD and ToFu reduce FLOPs by merging redundant tokens in generated images but rely on GPU-inefficient operations (e.g., sorting, scattered writes), introducing overheads that negate theoretical speedups when paired with optimized attention implementations (e.g., FlashAttention). To bridge this gap, we propose Token Merge with Attention (ToMA), an off-the-shelf method that redesigns token reduction for GPU-aligned efficiency, with three key contributions: 1) a reformulation of token merge as a submodular optimization problem to select diverse tokens; 2) merge/unmerge as an attention-like linear transformation via GPU-friendly matrix operations; and 3) exploiting latent locality and sequential redundancy (pattern reuse) to minimize overhead. ToMA reduces SDXL/Flux generation latency by 24%/23%, respectively (with DINO Δ < 0.07), outperforming prior methods. This work bridges the gap between theoretical and practical efficiency for transformers in diffusion.
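
The first two contributions in the abstract, selecting a diverse token subset via a submodular objective and expressing merge/unmerge as dense, attention-like matrix products, can be illustrated with a small sketch. The snippet below is a hypothetical toy version, not the authors' implementation: the facility-location surrogate, the hard nearest-neighbor assignment, and all function names and tensor shapes are assumptions made purely for illustration.

```python
# Hypothetical sketch only; not the ToMA codebase.
import torch
import torch.nn.functional as F


def select_diverse(x: torch.Tensor, r: int) -> list[int]:
    """Greedily maximize a facility-location-style submodular surrogate:
    each step keeps the token that best covers tokens not yet represented."""
    xn = F.normalize(x, dim=-1)
    sim = xn @ xn.T                                   # (n, n) cosine similarities
    covered = torch.zeros(x.shape[0])                 # best coverage so far, per token
    chosen: list[int] = []
    for _ in range(r):
        gain = torch.clamp(sim - covered, min=0).sum(dim=1)  # marginal coverage gain
        if chosen:
            gain[chosen] = float("-inf")              # never re-pick a kept token
        best = int(gain.argmax())
        chosen.append(best)
        covered = torch.maximum(covered, sim[best])
    return chosen


def merge_unmerge_matrices(x: torch.Tensor, idx: list[int]):
    """Hard-assign each token to its most similar kept token and return
    (M, U): merging is merged = M @ x, unmerging is restored = U @ merged."""
    xn = F.normalize(x, dim=-1)
    sim = xn[idx] @ xn.T                              # (r, n) similarity to kept tokens
    assign = sim.argmax(dim=0)                        # nearest kept token per source token
    U = F.one_hot(assign, num_classes=len(idx)).float()      # (n, r) membership
    M = U.T / U.T.sum(dim=1, keepdim=True).clamp(min=1)      # (r, n) group averaging
    return M, U


# Toy usage: both merge and unmerge are single dense matmuls.
x = torch.randn(1024, 64)                             # 1024 tokens, 64-dim (toy sizes)
idx = select_diverse(x, r=256)
M, U = merge_unmerge_matrices(x, idx)
merged = M @ x                                        # (256, 64) reduced token set
restored = U @ merged                                 # (1024, 64) approximate unmerge
```

Because both directions are plain matrix multiplications, they compose naturally with fused attention kernels such as FlashAttention rather than relying on sorting or scattered writes, which is the gap between theoretical and practical speedups that the abstract highlights.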

View on arXiv