  3. 2506.18372
OpenEvents V1: Large-Scale Benchmark Dataset for Multimodal Event Grounding

23 June 2025
Hieu Nguyen
Phuc-Tan Nguyen
T. Tran
Minh-Quang Nguyen
Tam V. Nguyen
Minh-Triet Tran
T. Le
Main: 6 pages · 7 figures · 4 tables · Bibliography: 1 page
Abstract

We introduce OpenEvents V1, a large-scale benchmark dataset designed to advance event-centric vision-language understanding. Unlike conventional image captioning and retrieval datasets that focus on surface-level descriptions, OpenEvents V1 emphasizes contextual and temporal grounding through three primary tasks: (1) generating rich, event-aware image captions, (2) retrieving event-relevant news articles from image queries, and (3) retrieving event-relevant images from narrative-style textual queries. The dataset comprises over 200,000 news articles and 400,000 associated images sourced from CNN and The Guardian, spanning diverse domains and time periods. We provide extensive baseline results and standardized evaluation protocols for all tasks. OpenEvents V1 establishes a robust foundation for developing multimodal AI systems capable of deep reasoning over complex real-world events. The dataset is publicly available at this https URL.
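The abstract mentions standardized evaluation protocols for the two cross-modal retrieval tasks (image→article and text→image). A common metric for such tasks is Recall@K: the fraction of queries whose ground-truth item appears among the top-K retrieved candidates. The sketch below is only an illustration of that metric, not the paper's actual protocol; the similarity matrix and the diagonal query-to-ground-truth pairing are assumptions.

```python
import numpy as np

def recall_at_k(similarity: np.ndarray, k: int) -> float:
    """Recall@K for retrieval, assuming query i's ground-truth item is
    candidate i (a common convention for paired image-text test sets)."""
    # Rank candidates for each query by descending similarity score.
    top_k = np.argsort(-similarity, axis=1)[:, :k]
    hits = [i in top_k[i] for i in range(similarity.shape[0])]
    return float(np.mean(hits))

# Toy example: 3 queries scored against 4 candidates.
sim = np.array([
    [0.9, 0.1, 0.2, 0.0],  # query 0: correct item (index 0) ranked 1st
    [0.3, 0.2, 0.8, 0.1],  # query 1: correct item (index 1) ranked 3rd
    [0.1, 0.4, 0.2, 0.7],  # query 2: correct item (index 2) ranked 3rd
])
print(recall_at_k(sim, 1))  # only query 0 is a top-1 hit
print(recall_at_k(sim, 3))  # all three ground-truth items are in the top 3
```

In practice the similarity matrix would come from embedding the queries and candidates with a retrieval model and taking pairwise cosine similarities; the metric itself is model-agnostic.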
