CODA: Repurposing Continuous VAEs for Discrete Tokenization

22 March 2025
Zeyu Liu
Zanlin Ni
Yeguo Hua
Xin Deng
Xiao Ma
Cheng Zhong
Gao Huang
Abstract

Discrete visual tokenizers transform images into a sequence of tokens, enabling token-based visual generation akin to language models. However, this process is inherently challenging, as it requires both compressing visual signals into a compact representation and discretizing them into a fixed set of codes. Traditional discrete tokenizers typically learn the two tasks jointly, often leading to unstable training, low codebook utilization, and limited reconstruction quality. In this paper, we introduce CODA (COntinuous-to-Discrete Adaptation), a framework that decouples compression and discretization. Instead of training discrete tokenizers from scratch, CODA adapts off-the-shelf continuous VAEs -- already optimized for perceptual compression -- into discrete tokenizers via a carefully designed discretization process. By focusing primarily on discretization, CODA ensures stable and efficient training while retaining the strong visual fidelity of continuous VAEs. Empirically, with 6× less training budget than standard VQGAN, our approach achieves a remarkable codebook utilization of 100% and notable reconstruction FIDs (rFID) of 0.43 and 1.34 for 8× and 16× compression on the ImageNet 256×256 benchmark.
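To make the continuous-to-discrete adaptation idea concrete, below is a minimal sketch (not the paper's actual implementation) of quantizing latents from a frozen, pretrained continuous VAE with a learned codebook via nearest-neighbor vector quantization and a straight-through estimator. The class name LatentQuantizer, the codebook size, the latent dimension, and the choice of quantization scheme are illustrative assumptions, not details taken from CODA.

import torch
import torch.nn as nn

class LatentQuantizer(nn.Module):
    """Maps continuous VAE latents to their nearest codebook entries (illustrative sketch)."""
    def __init__(self, codebook_size=8192, latent_dim=16):
        super().__init__()
        # Learned codebook of discrete latent codes (hypothetical size/dim).
        self.codebook = nn.Embedding(codebook_size, latent_dim)

    def forward(self, z):
        # z: (B, C, H, W) continuous latents from a frozen, pretrained VAE encoder.
        b, c, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, c)        # (B*H*W, C)
        dists = torch.cdist(flat, self.codebook.weight)    # pairwise L2 distances to codes
        idx = dists.argmin(dim=1)                          # nearest code index per latent vector
        z_q = self.codebook(idx).view(b, h, w, c).permute(0, 3, 1, 2)
        # Straight-through estimator: gradients pass through the quantization step.
        z_q = z + (z_q - z).detach()
        return z_q, idx.view(b, h, w)

# Usage sketch: in practice z would come from an off-the-shelf continuous VAE
# (kept frozen); here a random tensor stands in for its latents.
quantizer = LatentQuantizer()
z = torch.randn(2, 16, 32, 32)
z_q, tokens = quantizer(z)
print(z_q.shape, tokens.shape)  # torch.Size([2, 16, 32, 32]) torch.Size([2, 32, 32])

The point of the sketch is the decoupling described in the abstract: perceptual compression is inherited from the pretrained continuous VAE, and only the discretization step is trained on top of its latents.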

@article{liu2025_2503.17760,
  title={CODA: Repurposing Continuous VAEs for Discrete Tokenization},
  author={Zeyu Liu and Zanlin Ni and Yeguo Hua and Xin Deng and Xiao Ma and Cheng Zhong and Gao Huang},
  journal={arXiv preprint arXiv:2503.17760},
  year={2025}
}