EgoDTM: Towards 3D-Aware Egocentric Video-Language Pretraining

19 March 2025
Boshen Xu, Yuting Mei, Xinbi Liu, Sipeng Zheng, Qin Jin
Abstract

Egocentric video-language pretraining has significantly advanced video representation learning. Humans perceive and interact with a fully 3D world, developing spatial awareness that extends beyond text-based understanding. However, most previous works learn from 1D text or 2D visual cues, such as bounding boxes, which inherently lack 3D understanding. To bridge this gap, we introduce EgoDTM, an Egocentric Depth- and Text-aware Model, jointly trained through large-scale 3D-aware video pretraining and video-text contrastive learning. EgoDTM incorporates a lightweight 3D-aware decoder to efficiently learn 3D awareness from pseudo depth maps generated by depth estimation models. To further facilitate 3D-aware video pretraining, we enrich the original brief captions with hand-object visual cues by organically combining several foundation models. Extensive experiments demonstrate EgoDTM's superior performance across diverse downstream tasks, highlighting its strong 3D-aware visual understanding. Our code will be released at this https URL.
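As described, EgoDTM combines two objectives: a video-text contrastive loss and a 3D-aware pretraining loss in which a lightweight decoder regresses pseudo depth maps produced by an off-the-shelf depth estimator. Below is a minimal PyTorch sketch of such a joint objective; the module structure, the L1 depth loss, and the lambda_depth weighting are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EgoDTMSketch(nn.Module):
    """Minimal sketch of a jointly trained depth- and text-aware video model.

    Assumptions (not from the paper): the video encoder returns a global clip
    embedding plus a dense feature map; the 3D-aware decoder is a shallow conv
    head regressing pseudo depth; the two losses are mixed by a scalar weight.
    """

    def __init__(self, video_encoder, text_encoder, feat_dim=768, temperature=0.07):
        super().__init__()
        self.video_encoder = video_encoder  # assumed to return (global_emb, dense_feats)
        self.text_encoder = text_encoder    # assumed to return a global text embedding
        # Lightweight 3D-aware decoder: a small conv head over dense features.
        self.depth_decoder = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim // 4, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(feat_dim // 4, 1, kernel_size=1),
        )
        self.logit_scale = nn.Parameter(torch.tensor(1.0 / temperature).log())

    def forward(self, video, text_tokens, pseudo_depth, lambda_depth=0.5):
        v_emb, dense = self.video_encoder(video)   # (B, D), (B, D, H, W)
        t_emb = self.text_encoder(text_tokens)     # (B, D)

        # Video-text contrastive loss: symmetric InfoNCE over the batch.
        v = F.normalize(v_emb, dim=-1)
        t = F.normalize(t_emb, dim=-1)
        logits = self.logit_scale.exp() * v @ t.t()
        labels = torch.arange(v.size(0), device=v.device)
        loss_con = (F.cross_entropy(logits, labels) +
                    F.cross_entropy(logits.t(), labels)) / 2

        # 3D-aware pretraining: regress pseudo depth maps from the frozen
        # depth estimator, upsampled to the target resolution.
        pred_depth = self.depth_decoder(dense)      # (B, 1, H, W)
        pred_depth = F.interpolate(pred_depth, size=pseudo_depth.shape[-2:],
                                   mode="bilinear", align_corners=False)
        loss_depth = F.l1_loss(pred_depth, pseudo_depth)

        return loss_con + lambda_depth * loss_depth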

@article{xu2025_2503.15470,
  title={EgoDTM: Towards 3D-Aware Egocentric Video-Language Pretraining},
  author={Boshen Xu and Yuting Mei and Xinbi Liu and Sipeng Zheng and Qin Jin},
  journal={arXiv preprint arXiv:2503.15470},
  year={2025}
}