Fine-Tuning Video-Text Contrastive Model for Primate Behavior Retrieval from Unlabeled Raw Videos

Abstract

Video recordings of nonhuman primates in their natural habitat are a common source for studying their behavior in the wild. We fine-tune pre-trained video-text foundation models for the specific domain of capuchin monkeys, with the goal of developing useful computational models that help researchers retrieve relevant clips from video footage. We focus on the challenging problem of training a model solely from raw, unlabeled video footage, using the weak audio descriptions occasionally provided by field collaborators. We leverage recent advances in Multimodal Large Language Models (MLLMs) and Vision-Language Models (VLMs) to address the extremely noisy nature of both the video and audio content. Specifically, we propose a twofold approach: an agentic data treatment pipeline and a fine-tuning process. The data processing pipeline automatically extracts clean, semantically aligned video-text pairs from the raw videos, which are subsequently used to fine-tune Microsoft's pre-trained X-CLIP model through Low-Rank Adaptation (LoRA). On our domain data, we obtained an uplift in Hits@5 of 167% for the 16-frame model and of 114% for the 8-frame model. Moreover, based on NDCG@K results, our model ranks most of the considered behaviors well, while the tested raw pre-trained models are not able to rank them at all. The code will be made available upon acceptance.
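For reference, the two retrieval metrics reported in the abstract (Hits@K and NDCG@K) can be computed as in the following minimal sketch in plain Python; the function names and the graded-relevance convention are illustrative assumptions, not the authors' implementation:

```python
import math

def hits_at_k(ranked_ids, relevant_id, k):
    """Return 1.0 if the relevant item appears in the top-k ranked results, else 0.0.

    Hits@K (averaged over queries) is the fraction of queries for which the
    correct clip is retrieved within the first k results.
    """
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def ndcg_at_k(relevances, k):
    """NDCG@k for a ranked list of graded relevance scores.

    DCG discounts each relevance score by log2 of its (1-indexed) rank + 1;
    the result is normalized by the DCG of the ideal (descending) ordering.
    """
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0
```

In a video-text retrieval setting, `ranked_ids` would be the clips sorted by similarity to a text query, and `relevances` the (possibly graded) relevance labels of the returned clips in ranked order.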

@article{santo2025_2505.05681,
  title={Fine-Tuning Video-Text Contrastive Model for Primate Behavior Retrieval from Unlabeled Raw Videos},
  author={Giulio Cesare Mastrocinque Santo and Patrícia Izar and Irene Delval and Victor de Napole Gregolin and Nina S. T. Hirata},
  journal={arXiv preprint arXiv:2505.05681},
  year={2025}
}