
VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers

Yating Wang
Haoyi Zhu
Mingyu Liu
Jiange Yang
Hao-Shu Fang
Tong He
Main: 7 pages, 3 figures, 7 tables; Bibliography: 3 pages
Abstract

In this paper, we introduce an innovative vector-quantization-based action tokenizer built upon the largest-scale action trajectory dataset to date, leveraging over 100 times more data than previous approaches. This extensive dataset enables our tokenizer to capture rich spatiotemporal dynamics, resulting in a model that not only accelerates inference but also generates smoother and more coherent action outputs. Once trained, the tokenizer can be seamlessly adapted to a wide range of downstream tasks in a zero-shot manner, from short-horizon reactive behaviors to long-horizon planning. A key finding of our work is that the domain gap between synthetic and real action trajectories is marginal, allowing us to effectively utilize a vast amount of synthetic data during training without compromising real-world performance. To validate our approach, we conducted extensive experiments in both simulated environments and on real robotic platforms. The results demonstrate that as the volume of synthetic trajectory data increases, the performance of our tokenizer on downstream tasks improves significantly; most notably, it achieves up to a 30% higher success rate on two real-world long-horizon tasks. These findings highlight the potential of our action tokenizer as a robust and scalable solution for real-time embodied intelligence systems, paving the way for more efficient and reliable robotic control in diverse applications (this http URL; website: this https URL).
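As a rough illustration of the core idea (not the authors' released implementation), the sketch below shows a minimal vector-quantized action tokenizer in PyTorch: a chunk of continuous actions is encoded to a latent vector, snapped to its nearest codebook entry with a straight-through estimator, and decoded back, with the codebook index serving as a discrete action token for a VLA model. All names, dimensions, and hyperparameters (e.g., VQActionTokenizer, chunk_len=16, codebook_size=512) are illustrative assumptions.

```python
# Minimal sketch of a VQ action tokenizer (assumed structure, for illustration only).
import torch
import torch.nn as nn


class VQActionTokenizer(nn.Module):
    def __init__(self, action_dim=7, chunk_len=16, latent_dim=64, codebook_size=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(action_dim * chunk_len, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim * chunk_len),
        )
        self.codebook = nn.Embedding(codebook_size, latent_dim)

    def quantize(self, z):
        # Nearest-neighbor lookup over the codebook (L2 distance).
        dists = torch.cdist(z, self.codebook.weight)   # (B, codebook_size)
        indices = dists.argmin(dim=-1)                 # discrete action tokens, (B,)
        z_q = self.codebook(indices)
        # Straight-through estimator so gradients flow back to the encoder.
        z_q = z + (z_q - z).detach()
        return z_q, indices

    def forward(self, actions):
        # actions: (B, chunk_len, action_dim) continuous trajectory chunk
        B = actions.shape[0]
        z = self.encoder(actions.reshape(B, -1))
        z_q, indices = self.quantize(z)
        recon = self.decoder(z_q).reshape_as(actions)
        return recon, indices


if __name__ == "__main__":
    tok = VQActionTokenizer()
    chunk = torch.randn(4, 16, 7)        # e.g. 16 steps of 7-DoF actions
    recon, tokens = tok(chunk)
    print(tokens.shape, recon.shape)     # tokens: (4,), recon: (4, 16, 7)
```

In practice a tokenizer of this kind would likely emit several codes per chunk (e.g., via residual or grouped quantization) and be trained with reconstruction plus commitment losses; the single-code version above is kept deliberately small.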

@article{wang2025_2507.01016,
  title={VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers},
  author={Yating Wang and Haoyi Zhu and Mingyu Liu and Jiange Yang and Hao-Shu Fang and Tong He},
  journal={arXiv preprint arXiv:2507.01016},
  year={2025}
}