ResearchTrend.AI
Proactive Assistant Dialogue Generation from Streaming Egocentric Videos

6 June 2025
Yichi Zhang
Xin Luna Dong
Zhaojiang Lin
Andrea Madotto
Anuj Kumar
Babak Damavandi
Joyce Chai
Seungwhan Moon
Main: 3 pages · 11 figures · 11 tables · Appendix: 22 pages
Abstract

Recent advances in conversational AI have been substantial, but developing real-time systems for perceptual task guidance remains challenging. These systems must provide interactive, proactive assistance based on streaming visual inputs, yet their development is constrained by the costly and labor-intensive process of data collection and system evaluation. To address these limitations, we present a comprehensive framework with three key contributions. First, we introduce a novel data curation pipeline that synthesizes dialogues from annotated egocentric videos, resulting in \dataset, a large-scale synthetic dialogue dataset spanning multiple domains. Second, we develop a suite of automatic evaluation metrics, validated through extensive human studies. Third, we propose an end-to-end model that processes streaming video inputs to generate contextually appropriate responses, incorporating novel techniques for handling data imbalance and long-duration videos. This work lays the foundation for developing real-time, proactive AI assistants capable of guiding users through diverse tasks. Project page: this https URL
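The abstract describes a model that consumes a video stream and decides, frame by frame, whether to proactively respond. As a purely illustrative sketch (not the paper's architecture — every name, type, and policy below is an assumption), such a streaming decision loop might look like:

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Iterator, List, Tuple

@dataclass
class Frame:
    """One step of the egocentric video stream (hypothetical structure)."""
    timestamp: float
    features: List[float]  # stand-in for per-frame visual features

def proactive_loop(
    frames: Iterable[Frame],
    should_respond: Callable[[Frame], bool],
    respond: Callable[[Frame], str],
) -> Iterator[Tuple[float, str]]:
    """Walk the stream; at each frame, decide whether to speak or stay silent.

    The dominance of "stay silent" frames over "respond" frames is the kind
    of data imbalance the abstract says the model must handle.
    """
    for frame in frames:
        if should_respond(frame):
            yield (frame.timestamp, respond(frame))

# Toy usage with dummy policies (not the paper's model):
stream = [Frame(t, [0.0]) for t in range(5)]
out = list(proactive_loop(
    stream,
    should_respond=lambda f: f.timestamp % 2 == 0,
    respond=lambda f: f"hint at t={f.timestamp}",
))
# out == [(0, 'hint at t=0'), (2, 'hint at t=2'), (4, 'hint at t=4')]
```

The generator shape matters for streaming: responses are emitted as frames arrive rather than after the whole video is seen.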

@article{zhang2025_2506.05904,
  title={Proactive Assistant Dialogue Generation from Streaming Egocentric Videos},
  author={Yichi Zhang and Xin Luna Dong and Zhaojiang Lin and Andrea Madotto and Anuj Kumar and Babak Damavandi and Joyce Chai and Seungwhan Moon},
  journal={arXiv preprint arXiv:2506.05904},
  year={2025}
}