EgoOops: A Dataset for Mistake Action Detection from Egocentric Videos Referring to Procedural Texts

7 October 2024
Yuto Haneji
Taichi Nishimura
Hirotaka Kameko
Keisuke Shirai
Tomoya Yoshida
Keiya Kajimura
Koki Yamamoto
Taiyu Cui
Tomohiro Nishimoto
Shinsuke Mori
Abstract

Mistake action detection is crucial for developing intelligent archives that detect workers' errors and provide feedback. Existing studies have focused on visually apparent mistakes in free-style activities, resulting in video-only approaches to mistake detection. However, in text-following activities, models cannot determine the correctness of some actions without referring to the texts. Additionally, current mistake datasets rarely use procedural texts for video recording except for cooking. To fill these gaps, this paper proposes the EgoOops dataset, where egocentric videos record erroneous activities when following procedural texts across diverse domains. It features three types of annotations: video-text alignment, mistake labels, and descriptions for mistakes. We also propose a mistake detection approach, combining video-text alignment and mistake label classification to leverage the texts. Our experimental results show that incorporating procedural texts is essential for mistake detection. Data is available through this https URL.
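The abstract describes a two-stage approach: align each video segment to a step in the procedural text, then classify whether the observed action is a mistake while conditioning on the aligned step. The sketch below illustrates that general idea only; the module names, feature dimensions, pre-extracted features, and cosine-similarity alignment are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of text-referenced mistake detection:
# (1) align each clip to a procedural-text step, (2) classify the clip
# as correct or mistaken using both the clip and the aligned step.
# Dimensions, layers, and the similarity-based alignment are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextReferencedMistakeDetector(nn.Module):
    def __init__(self, video_dim: int = 512, text_dim: int = 512, hidden: int = 256):
        super().__init__()
        # Project both modalities into a shared space for alignment.
        self.video_proj = nn.Linear(video_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        # Binary head: 0 = correct action, 1 = mistake.
        self.classifier = nn.Sequential(
            nn.Linear(hidden * 2, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, clip_feats: torch.Tensor, step_feats: torch.Tensor):
        """clip_feats: (num_clips, video_dim) pre-extracted clip features.
        step_feats: (num_steps, text_dim) pre-extracted step embeddings."""
        v = F.normalize(self.video_proj(clip_feats), dim=-1)
        t = F.normalize(self.text_proj(step_feats), dim=-1)
        # Stage 1: video-text alignment via cosine similarity.
        sim = v @ t.T                 # (num_clips, num_steps)
        aligned = sim.argmax(dim=-1)  # best-matching step per clip
        # Stage 2: mistake classification conditioned on the aligned step.
        logits = self.classifier(torch.cat([v, t[aligned]], dim=-1))
        return aligned, logits


if __name__ == "__main__":
    model = TextReferencedMistakeDetector()
    clips = torch.randn(8, 512)   # e.g., 8 clips from one recording
    steps = torch.randn(5, 512)   # e.g., 5 steps in the procedural text
    aligned, logits = model(clips, steps)
    print(aligned.shape, logits.shape)  # torch.Size([8]) torch.Size([8, 2])
```

The point of the conditioning in stage 2 is the abstract's central claim: without the aligned text step, some actions are visually indistinguishable from correct ones, so a video-only classifier cannot label them as mistakes.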

@article{haneji2025_2410.05343,
  title={EgoOops: A Dataset for Mistake Action Detection from Egocentric Videos Referring to Procedural Texts},
  author={Yuto Haneji and Taichi Nishimura and Hirotaka Kameko and Keisuke Shirai and Tomoya Yoshida and Keiya Kajimura and Koki Yamamoto and Taiyu Cui and Tomohiro Nishimoto and Shinsuke Mori},
  journal={arXiv preprint arXiv:2410.05343},
  year={2025}
}