
Rethinking Discrete Tokens: Treating Them as Conditions for Continuous Autoregressive Image Synthesis

Peng Zheng
Junke Wang
Yi Chang
Yizhou Yu
Rui Ma
Zuxuan Wu
Main: 8 Pages
8 Figures
Bibliography: 3 Pages
3 Tables
Abstract

Recent advances in large language models (LLMs) have spurred interest in encoding images as discrete tokens and leveraging autoregressive (AR) frameworks for visual generation. However, the quantization process in AR-based visual generation models inherently introduces information loss that degrades image fidelity. To mitigate this limitation, recent studies have explored autoregressively predicting continuous tokens. Unlike discrete tokens, which reside in a structured and bounded space, continuous representations exist in an unbounded, high-dimensional space, making density estimation more challenging and increasing the risk of generating out-of-distribution artifacts. Motivated by these observations, this work introduces DisCon (Discrete-Conditioned Continuous Autoregressive Model), a novel framework that reinterprets discrete tokens as conditional signals rather than generation targets. By modeling the conditional probability of continuous representations given discrete tokens, DisCon circumvents the optimization challenges of continuous token modeling while avoiding the information loss caused by quantization. DisCon achieves a gFID score of 1.38 on ImageNet 256×256 generation, outperforming state-of-the-art autoregressive approaches by a clear margin.
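The core idea of conditioning continuous prediction on discrete tokens can be illustrated with a minimal sketch. The module below is purely illustrative and not the paper's architecture: the tokenizers, dimensions, Transformer depth, and the Gaussian likelihood head are all assumptions made for the example; the abstract does not specify them.

```python
# Illustrative sketch: model p(continuous latents | discrete tokens).
# All names, dimensions, and the Gaussian head are assumptions, not DisCon's
# actual implementation.
import torch
import torch.nn as nn


class DiscreteConditionedHead(nn.Module):
    """Predicts continuous latents conditioned on discrete (quantized) token ids."""

    def __init__(self, vocab_size=1024, embed_dim=256, latent_dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(
                d_model=embed_dim, nhead=8, batch_first=True
            ),
            num_layers=2,
        )
        # Gaussian head: mean and log-variance of p(continuous | discrete).
        self.mean = nn.Linear(embed_dim, latent_dim)
        self.logvar = nn.Linear(embed_dim, latent_dim)

    def forward(self, discrete_ids):
        h = self.backbone(self.embed(discrete_ids))  # (B, T, embed_dim)
        return self.mean(h), self.logvar(h)

    def nll(self, discrete_ids, continuous_target):
        """Gaussian negative log-likelihood (up to a constant) of the
        continuous targets given the discrete conditioning tokens."""
        mu, logvar = self(discrete_ids)
        return 0.5 * (
            (continuous_target - mu) ** 2 / logvar.exp() + logvar
        ).mean()


# Usage with dummy data: discrete ids would come from a quantized tokenizer,
# continuous targets from a continuous encoder of the same image (both
# hypothetical here).
model = DiscreteConditionedHead()
ids = torch.randint(0, 1024, (2, 64))      # (batch, num_tokens)
latents = torch.randn(2, 64, 16)           # (batch, num_tokens, latent_dim)
loss = model.nll(ids, latents)
loss.backward()
```

The sketch captures only the conditioning structure: coarse, bounded discrete tokens narrow the search space, and the continuous head refines them into a full-fidelity representation rather than being forced to model an unbounded distribution from scratch.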

@article{zheng2025_2507.01756,
  title={Rethinking Discrete Tokens: Treating Them as Conditions for Continuous Autoregressive Image Synthesis},
  author={Peng Zheng and Junke Wang and Yi Chang and Yizhou Yu and Rui Ma and Zuxuan Wu},
  journal={arXiv preprint arXiv:2507.01756},
  year={2025}
}