ResearchTrend.AI

SAMJAM: Zero-Shot Video Scene Graph Generation for Egocentric Kitchen Videos

10 April 2025
Joshua Li
Fernando Jose Pena Cantu
Emily Yu
Alexander Wong
Yuchen Cui
Yuhao Chen
Abstract

Video Scene Graph Generation (VidSGG) is an important topic in understanding dynamic kitchen environments. Current VidSGG models require extensive training to produce scene graphs. Recently, Vision Language Models (VLMs) and Vision Foundation Models (VFMs) have demonstrated impressive zero-shot capabilities on a variety of tasks. However, VLMs like Gemini struggle with the temporal dynamics of VidSGG, failing to maintain stable object identities across frames. To overcome this limitation, we propose SAMJAM, a zero-shot pipeline that combines SAM2's temporal tracking with Gemini's semantic understanding. SAM2 also improves on Gemini's object grounding by producing more accurate bounding boxes. Our method first prompts Gemini to generate a frame-level scene graph, then employs a matching algorithm to map each object in the scene graph to a SAM2-generated or SAM2-propagated mask, producing a temporally consistent scene graph in dynamic environments; this process is repeated for each subsequent frame. We empirically demonstrate that SAMJAM outperforms Gemini by 8.33% in mean recall on the EPIC-KITCHENS and EPIC-KITCHENS-100 datasets.
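The per-frame matching step described above can be sketched in code. The abstract does not specify the matching algorithm SAMJAM uses, so the following is a minimal, hypothetical greedy IoU matcher: each object Gemini detects in a frame is assigned to the SAM2 track whose mask (approximated here by a bounding box) overlaps it most, which is one simple way to keep object identities stable across frames. The function names, the dictionary schema, and the 0.3 threshold are all illustrative assumptions, not details from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_objects(gemini_objs, sam2_tracks, iou_thresh=0.3):
    """Greedily assign each Gemini-detected object to the best-overlapping
    SAM2 track, one-to-one, highest IoU pairs first (hypothetical sketch)."""
    # Score every (object, track) pair and visit them best-first.
    pairs = sorted(
        ((iou(o["box"], t["box"]), oi, ti)
         for oi, o in enumerate(gemini_objs)
         for ti, t in enumerate(sam2_tracks)),
        reverse=True,
    )
    assigned_o, assigned_t, matches = set(), set(), {}
    for score, oi, ti in pairs:
        if score < iou_thresh or oi in assigned_o or ti in assigned_t:
            continue  # below threshold, or one side already matched
        matches[gemini_objs[oi]["name"]] = sam2_tracks[ti]["track_id"]
        assigned_o.add(oi)
        assigned_t.add(ti)
    return matches

# Example: two frame-level detections matched to two persistent tracks.
gemini_objs = [{"name": "pan", "box": (0, 0, 10, 10)},
               {"name": "spoon", "box": (20, 20, 30, 30)}]
sam2_tracks = [{"track_id": 1, "box": (1, 1, 11, 11)},
               {"track_id": 2, "box": (21, 21, 31, 31)}]
print(match_objects(gemini_objs, sam2_tracks))  # {'pan': 1, 'spoon': 2}
```

In a full pipeline this matching would run once per frame, with unmatched Gemini objects spawning new SAM2 tracks and unmatched tracks propagated forward by SAM2's temporal tracker.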

@article{li2025_2504.07867,
  title={SAMJAM: Zero-Shot Video Scene Graph Generation for Egocentric Kitchen Videos},
  author={Joshua Li and Fernando Jose Pena Cantu and Emily Yu and Alexander Wong and Yuchen Cui and Yuhao Chen},
  journal={arXiv preprint arXiv:2504.07867},
  year={2025}
}