Modeling Fine-Grained Hand-Object Dynamics for Egocentric Video Representation Learning
In egocentric video understanding, the motion of hands and objects, as well as their interactions, naturally plays a significant role. However, existing egocentric video representation learning methods mainly focus on aligning video representations with high-level narrations, overlooking the intricate dynamics between hands and objects. In this work, we aim to integrate the modeling of fine-grained hand-object dynamics into the video representation learning process. Since no suitable data is available, we introduce HOD, a novel pipeline that employs a hand-object detector and a large language model to generate high-quality narrations with detailed descriptions of hand-object dynamics. To learn these fine-grained dynamics, we propose EgoVideo, a model with a new lightweight motion adapter that captures fine-grained hand-object motion information. Through our co-training strategy, EgoVideo effectively and efficiently leverages the fine-grained hand-object dynamics in the HOD data. Extensive experiments demonstrate that our method achieves state-of-the-art performance across multiple egocentric downstream tasks, including improvements of 6.3% in EK-100 multi-instance retrieval, 5.7% in EK-100 classification, and 16.3% in EGTEA classification in zero-shot settings. Furthermore, our model exhibits robust generalization capabilities in hand-object interaction and robot manipulation tasks. Code and data are available at this https URL.
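The abstract describes the motion adapter only at a high level, so the sketch below is a minimal, hypothetical illustration of what a "lightweight motion adapter" could look like: a bottleneck module with a depthwise temporal convolution added residually to per-frame backbone features. All module names, widths, and the temporal-convolution design are assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of a lightweight motion adapter (not the authors' code).
# Assumption: per-frame features of shape (batch, frames, dim) from a video backbone.
import torch
import torch.nn as nn


class MotionAdapter(nn.Module):
    """Bottleneck adapter that mixes information across frames via a depthwise temporal conv."""

    def __init__(self, dim: int = 768, bottleneck: int = 128):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)               # project to a small width
        self.temporal = nn.Conv1d(bottleneck, bottleneck,
                                  kernel_size=3, padding=1,
                                  groups=bottleneck)          # depthwise conv over the time axis
        self.up = nn.Linear(bottleneck, dim)                  # project back to backbone width
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.act(self.down(x))                            # (B, T, bottleneck)
        h = self.temporal(h.transpose(1, 2)).transpose(1, 2)  # mix features across frames
        return x + self.up(h)                                 # residual connection keeps backbone intact


if __name__ == "__main__":
    feats = torch.randn(2, 16, 768)                           # 2 clips, 16 frames each
    adapter = MotionAdapter()
    print(adapter(feats).shape)                               # torch.Size([2, 16, 768])
```

The residual bottleneck form is a common way to keep an adapter lightweight: only the small down/up projections and the depthwise temporal filter are trained, while the backbone features pass through unchanged.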
@article{pei2025_2503.00986,
  title={Modeling Fine-Grained Hand-Object Dynamics for Egocentric Video Representation Learning},
  author={Baoqi Pei and Yifei Huang and Jilan Xu and Guo Chen and Yuping He and Lijin Yang and Yali Wang and Weidi Xie and Yu Qiao and Fei Wu and Limin Wang},
  journal={arXiv preprint arXiv:2503.00986},
  year={2025}
}