Real-Time Diffusion Policies for Games: Enhancing Consistency Policies with Q-Ensembles

Diffusion models have shown impressive performance in capturing complex and multi-modal action distributions for game agents, but their slow inference speed prevents practical deployment in real-time game environments. While consistency models offer a promising approach for one-step generation, they often suffer from training instability and performance degradation when applied to policy learning. In this paper, we present CPQE (Consistency Policy with Q-Ensembles), which combines consistency models with Q-ensembles to address these challenges. CPQE leverages uncertainty estimation through Q-ensembles to provide more reliable value function approximations, resulting in better training stability and improved performance compared to classic double Q-network methods. Our extensive experiments across multiple game scenarios demonstrate that CPQE achieves inference speeds of up to 60 Hz -- a significant improvement over state-of-the-art diffusion policies that operate at only 20 Hz -- while maintaining comparable performance to multi-step diffusion approaches. CPQE consistently outperforms state-of-the-art consistency model approaches, showing both higher rewards and enhanced training stability throughout the learning process. These results indicate that CPQE offers a practical solution for deploying diffusion-based policies in games and other real-time applications where both multi-modal behavior modeling and rapid inference are critical requirements.
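The abstract does not specify how the Q-ensemble is built or aggregated; as a rough, hypothetical sketch of the general idea only (PyTorch-style, with the network sizes, ensemble size, and a mean-minus-standard-deviation aggregation all assumed rather than taken from the paper), an uncertainty-aware Q-ensemble value estimate could look like this:

```python
import torch
import torch.nn as nn

class QEnsemble(nn.Module):
    """Illustrative ensemble of independent Q-networks for uncertainty-aware
    value estimates. This is a generic sketch, not the CPQE architecture."""

    def __init__(self, state_dim: int, action_dim: int,
                 n_members: int = 5, hidden: int = 256):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(n_members)
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Returns per-member Q-values with shape (n_members, batch, 1).
        x = torch.cat([state, action], dim=-1)
        return torch.stack([q(x) for q in self.members], dim=0)

    def value_estimate(self, state: torch.Tensor, action: torch.Tensor,
                       beta: float = 1.0) -> torch.Tensor:
        # Mean-minus-std lower bound: one common way to turn ensemble
        # disagreement into a conservative value estimate (assumed here).
        qs = self.forward(state, action)
        return qs.mean(dim=0) - beta * qs.std(dim=0)
```

In such a setup, the spread across ensemble members acts as an uncertainty signal, which is the kind of value-function reliability the abstract attributes to Q-ensembles over classic double Q-network methods.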
@article{zhang2025_2503.16978,
  title   = {Real-Time Diffusion Policies for Games: Enhancing Consistency Policies with Q-Ensembles},
  author  = {Ruoqi Zhang and Ziwei Luo and Jens Sjölund and Per Mattsson and Linus Gisslén and Alessandro Sestini},
  journal = {arXiv preprint arXiv:2503.16978},
  year    = {2025}
}