OneFlow: Concurrent Mixed-Modal and Interleaved Generation with Edit Flows

3 October 2025 · arXiv:2510.03506 (v2, latest)
John Nguyen, Marton Havasi, Tariq Berrada, Luke Zettlemoyer, Ricky T. Q. Chen
Links: arXiv (abs) · PDF · HTML · HuggingFace · GitHub
Main: 13 pages · 17 figures · 6 tables · Bibliography: 5 pages · Appendix: 13 pages
Abstract

We present OneFlow, the first non-autoregressive multimodal model that enables variable-length and concurrent mixed-modal generation. Unlike autoregressive models that enforce rigid causal ordering between text and image generation, OneFlow combines an insertion-based Edit Flow for discrete text tokens with Flow Matching for image latents. OneFlow enables concurrent text-image synthesis with hierarchical sampling that prioritizes content over grammar. Through controlled experiments across model sizes from 1B to 8B, we demonstrate that OneFlow outperforms autoregressive baselines on both generation and understanding tasks while using up to 50% fewer training FLOPs. OneFlow surpasses both autoregressive and diffusion-based approaches while unlocking new capabilities for concurrent generation, iterative refinement, and natural reasoning-like generation.
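
The abstract names two building blocks: Flow Matching over continuous image latents and an insertion-based edit process over discrete text tokens. The sketch below illustrates both in PyTorch under standard textbook assumptions; the model signatures, tensor shapes, and the thresholded insertion rule are illustrative stand-ins, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(model, x1, cond):
    """Standard conditional Flow Matching loss on image latents:
    regress the predicted velocity toward (x1 - x0) along the
    linear interpolation path x_t = (1 - t) * x0 + t * x1."""
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.size(0), device=x1.device)   # uniform time in [0, 1]
    tb = t.view(-1, *([1] * (x1.dim() - 1)))       # broadcast over latent dims
    xt = (1 - tb) * x0 + tb * x1                   # point on the linear path
    v_target = x1 - x0                             # constant target velocity
    v_pred = model(xt, t, cond)                    # hypothetical velocity net
    return F.mse_loss(v_pred, v_target)

def insertion_edit_step(tokens, insert_model, threshold=0.5):
    """One toy step of an insertion-based edit process for text.
    `insert_model` (hypothetical) scores every gap between tokens,
    returning gap logits of shape (len(tokens) + 1,) and per-gap
    token logits of shape (len(tokens) + 1, vocab). Gaps above the
    threshold receive a new token, so the sequence grows non-causally."""
    gap_logits, token_logits = insert_model(tokens)
    insert_here = torch.sigmoid(gap_logits) > threshold
    out = []
    for i, tok in enumerate(tokens):
        if insert_here[i]:                          # gap before token i
            out.append(token_logits[i].argmax().item())
        out.append(tok)
    if insert_here[len(tokens)]:                    # final gap
        out.append(token_logits[len(tokens)].argmax().item())
    return out
```

Because insertions can land in any gap within a single step, the text sequence grows in parallel rather than strictly left-to-right, which is what allows text generation to proceed concurrently with the image latent's flow rather than under a fixed causal ordering.
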
