Self-supervised Learning on Camera Trap Footage Yields a Strong Universal Face Embedder

Vladimir Iashin
Horace Lee
Dan Schofield
Andrew Zisserman
Main: 5 pages · 3 figures · 4 tables · Bibliography: 1 page
Abstract

Camera traps are revolutionising wildlife monitoring by capturing vast amounts of visual data; however, the manual identification of individual animals remains a significant bottleneck. This study introduces a fully self-supervised approach to learning robust chimpanzee face embeddings from unlabeled camera-trap footage. Leveraging the DINOv2 framework, we train Vision Transformers on automatically mined face crops, eliminating the need for identity labels. Our method demonstrates strong open-set re-identification performance, surpassing supervised baselines on challenging benchmarks such as Bossou, despite utilising no labelled data during training. This work underscores the potential of self-supervised learning in biodiversity monitoring and paves the way for scalable, non-invasive population studies.
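To make the open-set re-identification setting concrete, here is a minimal illustrative sketch (not the paper's code): face embeddings from a known gallery are compared to a query embedding by cosine similarity, and queries below a similarity threshold are rejected as unknown individuals. In the paper the embeddings come from a DINOv2-trained Vision Transformer; here random vectors stand in, and the names, dimensions, and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalise(x):
    # L2-normalise rows so the dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical gallery: one 384-d embedding per known individual.
gallery = normalise(rng.standard_normal((5, 384)))
names = ["A", "B", "C", "D", "E"]

def identify(query, threshold=0.5):
    """Return (identity, similarity), or (None, similarity) if the query
    falls below the threshold -- the open-set 'unknown individual' case."""
    sims = gallery @ normalise(query)
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return names[best], float(sims[best])
    return None, float(sims[best])

# A query close to gallery[2] should match individual "C"; an unrelated
# random vector will typically be rejected as unknown.
query = normalise(gallery[2] + 0.01 * rng.standard_normal(384))
print(identify(query))
```

The threshold trades off false matches against false rejections; in practice it would be calibrated on held-out data rather than fixed a priori.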
