
Kwai Keye-VL Technical Report

Kwai Keye Team
Biao Yang
Bin Wen
Changyi Liu
Chenglong Chu
Chengru Song
Chongling Rao
Chuan Yi
Da Li
Dunju Zang
Fan Yang
Guorui Zhou
Hao Peng
Haojie Ding
Jiaming Huang
Jiangxia Cao
Jiankang Chen
Jingyun Hua
Jin Ouyang
Kaibing Chen
Kaiyu Jiang
Kaiyu Tang
Kun Gai
Shengnan Zhang
Siyang Mao
Sui Huang
Tianke Zhang
Tingting Gao
Wei Chen
Wei Yuan
Xiangyu Wu
Xiao Hu
Xingyu Lu
Yang Zhou
Yi-Fan Zhang
Yiping Yang
Yulong Chen
Zhenhua Wu
Zhenyu Li
Zhixin Ling
Ziming Li
Dehua Ma
Di Xu
Haixuan Gao
Hang Li
Jiawei Guo
Jing Wang
Lejian Ren
Muhao Wei
Qianqian Wang
Qigen Hu
Shiyao Wang
Tao Yu
Xinchen Luo
Yan Li
Yiming Liang
Yuhang Hu
Zeyi Lu
Zhuoran Yang
Zixing Zhang
Main: 20 pages · 13 figures · 11 tables · Bibliography: 8 pages · Appendix: 13 pages
Abstract

While Multimodal Large Language Models (MLLMs) demonstrate remarkable capabilities on static images, they often fall short in comprehending dynamic, information-dense short-form videos, a dominant medium in today's digital landscape. To bridge this gap, we introduce \textbf{Kwai Keye-VL}, an 8-billion-parameter multimodal foundation model engineered for leading-edge performance in short-video understanding while maintaining robust general-purpose vision-language abilities. The development of Keye-VL rests on two core pillars: a massive, high-quality dataset exceeding 600 billion tokens with a strong emphasis on video, and an innovative training recipe. This recipe features a four-stage pre-training process for solid vision-language alignment, followed by a meticulous two-phase post-training process. The first post-training stage enhances foundational capabilities like instruction following, while the second phase focuses on stimulating advanced reasoning. In this second phase, a key innovation is our five-mode ``cold-start'' data mixture, which includes ``thinking'', ``non-thinking'', ``auto-think'', ``think with image'', and high-quality video data. This mixture teaches the model to decide when and how to reason. Subsequent reinforcement learning (RL) and alignment steps further enhance these reasoning capabilities and correct abnormal model behaviors, such as repetitive outputs. To validate our approach, we conduct extensive evaluations, showing that Keye-VL achieves state-of-the-art results on public video benchmarks and remains highly competitive on general image-based tasks (Figure 1). Furthermore, we develop and release the \textbf{KC-MMBench}, a new benchmark tailored for real-world short-video scenarios, where Keye-VL shows a significant advantage.
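The five-mode ``cold-start'' mixture described above lends itself to a simple configuration sketch. The snippet below is a minimal, hypothetical Python illustration of how such a mixture could be sampled when assembling cold-start post-training data; the mode names come from the abstract, while the weights, field names, and sampling logic are assumptions rather than the authors' implementation.

    # Minimal illustrative sketch (not the authors' code): one way to represent the
    # five-mode "cold-start" data mixture. Mode names follow the abstract; the
    # sampling weights are hypothetical placeholders.
    import random

    COLD_START_MODES = {
        "thinking": 0.3,            # responses with explicit reasoning traces
        "non_thinking": 0.3,        # direct answers without reasoning traces
        "auto_think": 0.2,          # model decides whether to reason
        "think_with_image": 0.1,    # reasoning grounded in visual evidence
        "high_quality_video": 0.1,  # curated video samples
    }

    def sample_mode(rng: random.Random) -> str:
        """Draw a data mode according to the (hypothetical) mixture weights."""
        modes, weights = zip(*COLD_START_MODES.items())
        return rng.choices(modes, weights=weights, k=1)[0]

    if __name__ == "__main__":
        rng = random.Random(0)
        print([sample_mode(rng) for _ in range(5)])

In practice the mixture proportions would be tuned empirically; the point of the sketch is only to show how the five modes partition the cold-start data so the model learns when and how to reason.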

@article{team2025_2507.01949,
  title={Kwai Keye-VL Technical Report},
  author={Kwai Keye Team and Biao Yang and Bin Wen and Changyi Liu and Chenglong Chu and Chengru Song and Chongling Rao and Chuan Yi and Da Li and Dunju Zang and Fan Yang and Guorui Zhou and Hao Peng and Haojie Ding and Jiaming Huang and Jiangxia Cao and Jiankang Chen and Jingyun Hua and Jin Ouyang and Kaibing Chen and Kaiyu Jiang and Kaiyu Tang and Kun Gai and Shengnan Zhang and Siyang Mao and Sui Huang and Tianke Zhang and Tingting Gao and Wei Chen and Wei Yuan and Xiangyu Wu and Xiao Hu and Xingyu Lu and Yang Zhou and Yi-Fan Zhang and Yiping Yang and Yulong Chen and Zhenhua Wu and Zhenyu Li and Zhixin Ling and Ziming Li and Dehua Ma and Di Xu and Haixuan Gao and Hang Li and Jiawei Guo and Jing Wang and Lejian Ren and Muhao Wei and Qianqian Wang and Qigen Hu and Shiyao Wang and Tao Yu and Xinchen Luo and Yan Li and Yiming Liang and Yuhang Hu and Zeyi Lu and Zhuoran Yang and Zixing Zhang},
  journal={arXiv preprint arXiv:2507.01949},
  year={2025}
}