Zero-shot Action Localization via the Confidence of Large Vision-Language Models

18 October 2024
Josiah Aklilu
Xiaohan Wang
Serena Yeung-Levy
Abstract

Precise action localization in untrimmed video is vital for fields such as professional sports and minimally invasive surgery, where the delineation of particular motions in recordings can dramatically enhance analysis. In many cases, however, large-scale datasets with video-label pairs for localization are unavailable, limiting the opportunity to fine-tune video-understanding models. Recent developments in large vision-language models (LVLMs) address this need with impressive zero-shot capabilities across a variety of video understanding tasks. However, the adaptation of LVLMs, with their powerful visual question answering capabilities, to zero-shot localization in long-form video remains relatively unexplored. To this end, we introduce a true Zero-shot Action Localization method (ZEAL). Specifically, we leverage the built-in action knowledge of a large language model (LLM) to inflate actions into detailed descriptions of the archetypal start and end of the action. These descriptions serve as queries to the LVLM for generating frame-level confidence scores, which can be aggregated to produce localization outputs. The simplicity and flexibility of our method makes it amenable to more capable LVLMs as they are developed, and we demonstrate remarkable results in zero-shot action localization on a challenging benchmark, without any training. Our code is publicly available at this https URL.
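The abstract describes a three-stage pipeline: an LLM inflates an action label into descriptions of its archetypal start and end, an LVLM scores each frame against those descriptions, and the frame-level scores are aggregated into localized intervals. The sketch below is a minimal illustration of that flow under stated assumptions, not the authors' released code: the query_llm and score_frame hooks, the thresholding aggregation, and all parameter names are hypothetical stand-ins.

# Minimal sketch of a ZEAL-style zero-shot localization pipeline (illustrative only).
# query_llm / score_frame are hypothetical hooks for an LLM and an LVLM; the
# aggregation is a simple thresholding scheme assumed for illustration, not the
# paper's exact procedure.
from typing import Callable, List, Tuple
import numpy as np

def inflate_action(action: str, query_llm: Callable[[str], str]) -> Tuple[str, str]:
    """Ask an LLM for archetypal start/end descriptions of an action."""
    start_desc = query_llm(f"Describe what the very start of '{action}' looks like.")
    end_desc = query_llm(f"Describe what the very end of '{action}' looks like.")
    return start_desc, end_desc

def frame_confidences(frames: List, description: str,
                      score_frame: Callable[[object, str], float]) -> np.ndarray:
    """Score each frame with the LVLM: confidence that it matches the description."""
    return np.array([score_frame(frame, description) for frame in frames])

def localize(start_scores: np.ndarray, end_scores: np.ndarray,
             threshold: float = 0.5) -> List[Tuple[int, int]]:
    """Pair each high-confidence start frame with the next high-confidence end frame."""
    starts = np.where(start_scores >= threshold)[0]
    ends = np.where(end_scores >= threshold)[0]
    intervals, last_end = [], -1
    for s in starts:
        if s <= last_end:
            continue  # frame already covered by a previous interval
        later_ends = ends[ends > s]
        if later_ends.size:
            e = int(later_ends[0])
            intervals.append((int(s), e))
            last_end = e
    return intervals

A caller would supply decoded video frames plus thin wrappers around whichever LLM and LVLM are available; as the abstract notes, swapping in a more capable LVLM only changes the scoring hook, leaving the rest of the pipeline untouched.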

@article{aklilu2025_2410.14340,
  title={Zero-shot Action Localization via the Confidence of Large Vision-Language Models},
  author={Josiah Aklilu and Xiaohan Wang and Serena Yeung-Levy},
  journal={arXiv preprint arXiv:2410.14340},
  year={2025}
}