MObyGaze: a film dataset of multimodal objectification densely annotated by experts

28 May 2025
Julie Tores, Elisa Ancarani, L. Sassatelli, Hui-Yin Wu, Clement Bergman, Lea Andolfi, Victor Ecrement, Rémy Sun, F. Precioso, Thierry Devars, Magali Guaresi, Virginie Julliard, Sarah Lecossais
Main: 10 pages · Bibliography: 2 pages · Appendix: 15 pages · 12 figures · 13 tables
Abstract

Characterizing and quantifying gender representation disparities in audiovisual storytelling content is necessary to grasp how stereotypes may be perpetuated on screen. In this article, we consider the high-level construct of objectification and introduce a new AI task to the ML community: characterize and quantify the complex multimodal (visual, speech, audio) temporal patterns that produce objectification in films. Building on film studies and psychology, we define the construct of objectification in a structured thesaurus involving 5 sub-constructs manifesting through 11 concepts spanning 3 modalities. We introduce the Multimodal Objectifying Gaze (MObyGaze) dataset, comprising 20 movies densely annotated by experts for objectification levels and concepts over freely delimited segments: it amounts to 6072 segments over 43 hours of video with fine-grained localization and categorization. We formulate different learning tasks, propose and investigate the best ways to learn from the diversity of labels produced by a small number of annotators, and benchmark recent vision, text, and audio models, showing the feasibility of the task. We make our code and dataset, described in the Croissant format, available to the community: this https URL.
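The abstract's point about learning from the diversity of labels among a small number of annotators admits a brief illustration. The Python sketch below is hypothetical: the segment fields, level names, and concept names are invented for illustration and are not the dataset's actual schema. It shows one common way to preserve annotator disagreement as a soft label distribution instead of collapsing it to a majority vote:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Segment:
    """One expert-annotated segment (hypothetical schema, for illustration)."""
    movie_id: str
    start_s: float        # freely delimited start time, in seconds
    end_s: float          # freely delimited end time, in seconds
    annotator: str
    level: str            # objectification level (label names invented here)
    concepts: list[str]   # subset of the thesaurus's 11 concepts

def soft_label(segments: list[Segment]) -> dict[str, float]:
    """Turn several annotators' level labels for one time span into a soft
    label distribution, one simple way to learn from label diversity
    rather than forcing a single hard label."""
    counts = Counter(s.level for s in segments)
    total = sum(counts.values())
    return {level: n / total for level, n in counts.items()}

# Three hypothetical annotations of the same span of one film:
anns = [
    Segment("film_01", 12.0, 18.5, "a1", "objectified", ["body"]),
    Segment("film_01", 12.0, 18.5, "a2", "objectified", ["body", "gaze"]),
    Segment("film_01", 12.0, 18.5, "a3", "ambiguous", ["gaze"]),
]
print(soft_label(anns))  # {'objectified': 0.667, 'ambiguous': 0.333} (approx.)
```

Since the dataset is distributed with Croissant metadata (machine-readable JSON-LD), standard Croissant loaders can enumerate its record sets once the actual schema at the linked URL is consulted.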
