HOI-R1: Exploring the Potential of Multimodal Large Language Models for Human-Object Interaction Detection
Recent human-object interaction detection (HOID) methods rely heavily on prior knowledge from vision-language models (VLMs) to enhance interaction recognition. Designing training strategies and model architectures that connect VLM knowledge to the HOI instance representations produced by the object detector is challenging, and the resulting frameworks are complex to extend or deploy. Meanwhile, the inherent reasoning abilities of multimodal large language models (MLLMs) for human-object interaction detection remain under-explored. Inspired by the recent success of training MLLMs with reinforcement learning (RL), we propose HOI-R1 and, for the first time, explore the potential of the language model on the HOID task without any additional detection modules. We introduce an HOI reasoning process and HOID reward functions to solve the HOID task purely in text. Experiments on HICO-DET across multiple open-source MLLMs, including the Qwen-VL family (Qwen2.5-VL and Qwen3-VL) and Rex-Omni, show consistent improvements. In particular, HOI-R1 doubles the accuracy of Qwen2.5-VL-3B while demonstrating strong generalization ability. The source code is available at this https URL.
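The abstract does not specify how the HOID reward functions are defined. The following is a minimal sketch of one plausible design, assuming a text-only output format of JSON HOI triplets scored by a format reward plus a ground-truth matching reward; the triplet schema, the IoU threshold of 0.5, and the 0.5/0.5 weighting are illustrative assumptions, not the paper's actual reward.

```python
# Hypothetical HOID reward for RL fine-tuning of an MLLM (illustrative sketch only).
# Assumed output schema: a JSON list of {"human_box", "object_box", "object", "verb"}.
import json
from typing import Dict, List


def box_iou(a: List[float], b: List[float]) -> float:
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def hoid_reward(completion: str, gt_triplets: List[Dict]) -> float:
    """Format reward (output parses into the expected structure)
    plus matching reward (fraction of ground-truth triplets recovered).

    A prediction matches a ground-truth triplet when the verb and object
    category agree and both boxes overlap it with IoU >= 0.5.
    """
    try:
        preds = json.loads(completion)
        assert isinstance(preds, list)
    except (json.JSONDecodeError, AssertionError):
        return 0.0  # unparsable text earns no reward

    format_reward = 0.5  # assumed weight for well-formed output
    matched, used = 0, set()
    for gt in gt_triplets:
        for i, p in enumerate(preds):
            if i in used:
                continue
            if (p.get("verb") == gt["verb"]
                    and p.get("object") == gt["object"]
                    and box_iou(p["human_box"], gt["human_box"]) >= 0.5
                    and box_iou(p["object_box"], gt["object_box"]) >= 0.5):
                matched += 1
                used.add(i)
                break
    match_reward = 0.5 * matched / max(len(gt_triplets), 1)
    return format_reward + match_reward
```

In an RL setup such as GRPO, a reward of this shape would let the policy be optimized purely on the model's text output, without attaching any detection head to the MLLM.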