EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World

24 March 2024
Yifei Huang, Guo Chen, Jilan Xu, Mingfang Zhang, Lijin Yang, Baoqi Pei, Hongjie Zhang, Lu Dong, Yali Wang, Limin Wang, Yu Qiao
Abstract

Being able to map the activities of others into one's own point of view is a fundamental human skill, present even from a very early age. Taking a step toward understanding this ability, we introduce EgoExoLearn, a large-scale dataset that emulates the human demonstration-following process, in which individuals record egocentric videos as they execute tasks guided by demonstration videos. Focusing on potential applications in daily assistance and professional support, EgoExoLearn contains egocentric and demonstration video data spanning 120 hours, captured in daily-life scenarios and specialized laboratories. Along with the videos, we record high-quality gaze data and provide detailed multimodal annotations, formulating a playground for modeling the human ability to bridge asynchronous procedural actions from different viewpoints. To this end, we present benchmarks such as cross-view association, cross-view action planning, and cross-view referenced skill assessment, along with detailed analysis. We expect EgoExoLearn to serve as an important resource for bridging actions across views, paving the way for AI agents capable of seamlessly learning by observing humans in the real world. Code and data can be found at: this https URL
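
As a concrete illustration of the cross-view association benchmark, the sketch below treats it as ego-to-exo retrieval over precomputed clip embeddings, matching each egocentric clip to its most similar demonstration clip by cosine similarity. This is a minimal, hypothetical setup, not the dataset's official code or baseline: the video encoder is left unspecified and the embeddings are random placeholders.

import numpy as np

# Toy stand-ins: assume each egocentric clip and each exocentric demonstration
# clip has already been encoded into a fixed-size embedding by some video
# encoder. The random features below are placeholders, not EgoExoLearn data.
rng = np.random.default_rng(0)
num_ego, num_exo, dim = 8, 8, 256
ego_embeds = rng.normal(size=(num_ego, dim))
exo_embeds = rng.normal(size=(num_exo, dim))

def l2_normalize(x, eps=1e-8):
    # Normalize rows so dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def associate_ego_to_exo(ego, exo):
    # For each egocentric clip, return the index of the most similar
    # demonstration clip under cosine similarity (ego-to-exo retrieval).
    sim = l2_normalize(ego) @ l2_normalize(exo).T  # shape: (num_ego, num_exo)
    return sim.argmax(axis=1)

matches = associate_ego_to_exo(ego_embeds, exo_embeds)
print(matches)  # predicted demonstration-clip index for each egocentric clip

In practice, the placeholder embeddings would come from a video model evaluated on EgoExoLearn's annotated ego-exo clip pairs rather than random features.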

@article{huang2025_2403.16182,
  title={EgoExoLearn: A Dataset for Bridging Asynchronous Ego- and Exo-centric View of Procedural Activities in Real World},
  author={Yifei Huang and Guo Chen and Jilan Xu and Mingfang Zhang and Lijin Yang and Baoqi Pei and Hongjie Zhang and Lu Dong and Yali Wang and Limin Wang and Yu Qiao},
  journal={arXiv preprint arXiv:2403.16182},
  year={2025}
}