Perception-R1: Pioneering Perception Policy with Reinforcement Learning

Inspired by the success of DeepSeek-R1, we explore the potential of rule-based reinforcement learning (RL) in MLLM post-training for perception policy learning. While promising, our initial experiments reveal that incorporating a thinking process through RL does not consistently lead to performance gains across all visual perception tasks. This leads us to examine the essential role of RL in the context of visual perception. In this work, we return to the fundamentals and explore the effects of RL on different perception tasks. We observe that perceptual complexity is a major factor in determining the effectiveness of RL. We also observe that reward design plays a crucial role in further approaching the upper limit of model perception. To leverage these findings, we propose Perception-R1, a scalable RL framework using GRPO during MLLM post-training. With a standard Qwen2.5-VL-3B-Instruct, Perception-R1 achieves +4.2% on RefCOCO+, +17.9% on PixMo-Count, +4.2% on PageOCR, and, notably, reaches 31.9% AP on COCO2017 val for the first time, establishing a strong baseline for perception policy learning.
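To make the rule-based reward and GRPO-style optimization concrete, below is a minimal sketch of how a perception reward and group-normalized advantages might be computed for a grounding task. This is an illustrative assumption, not the paper's released code: the helper names (grounding_reward, grpo_advantages), the IoU-based scoring, and the answer-parsing format are all hypothetical.

import re
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_reward(completion: str, gt_box) -> float:
    """Rule-based reward (assumed form): parse a predicted box from the
    model's text output and score it by IoU against the ground truth."""
    nums = re.findall(r"-?\d+\.?\d*", completion)
    if len(nums) < 4:
        return 0.0  # unparseable output receives zero reward
    pred = [float(v) for v in nums[:4]]
    return iou(pred, gt_box)

def grpo_advantages(rewards):
    """GRPO-style advantages: normalize each rollout's reward by the
    mean and std of its group, with no learned value function."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)

# Example: a group of 4 rollouts sampled for one grounding query
gt = [10, 20, 110, 220]
completions = [
    "<answer>[12, 22, 108, 215]</answer>",
    "<answer>[0, 0, 50, 50]</answer>",
    "no box found",
    "<answer>[10, 20, 110, 220]</answer>",
]
rewards = [grounding_reward(c, gt) for c in completions]
print(rewards, grpo_advantages(rewards))

In practice, tasks such as counting, OCR, or detection would swap in their own rule-based scoring (exact-count match, edit distance, AP-style matching) while keeping the same group-normalized update.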
@article{yu2025_2504.07954,
  title   = {Perception-R1: Pioneering Perception Policy with Reinforcement Learning},
  author  = {En Yu and Kangheng Lin and Liang Zhao and Jisheng Yin and Yana Wei and Yuang Peng and Haoran Wei and Jianjian Sun and Chunrui Han and Zheng Ge and Xiangyu Zhang and Daxin Jiang and Jingyu Wang and Wenbing Tao},
  journal = {arXiv preprint arXiv:2504.07954},
  year    = {2025}
}