Shared Multi-modal Embedding Space for Face-Voice Association
Christopher Simic
Korbinian Riedhammer
Tobias Bocklet
CVBM

Abstract
The FAME 2026 challenge combines two demanding tasks: learning face-voice associations, and doing so in a multilingual setting that includes testing on languages the model was not trained on. Our approach consists of separate uni-modal processing pipelines with general face and voice feature extraction, complemented by additional age and gender feature extraction to support the prediction. The resulting single-modal features are projected into a shared embedding space and trained with an Adaptive Angular Margin (AAM) loss. Our approach achieved first place in the FAME 2026 challenge, with an average Equal Error Rate (EER) of 23.99%.
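The abstract does not spell out the implementation, but the described design (separate uni-modal projection heads into a shared space, trained with an angular-margin loss) can be illustrated with a minimal sketch. The code below is an assumption-laden PyTorch example, not the authors' implementation: the feature dimensions, the concatenation of age/gender features, the layer sizes, and the ArcFace-style additive-margin formulation of the AAM loss are all illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEmbeddingModel(nn.Module):
    """Sketch: project pre-extracted face and voice features into one shared space.
    Dimensions and architecture are assumptions, not the paper's configuration."""

    def __init__(self, face_dim=512, voice_dim=192, extra_dim=2, embed_dim=256):
        super().__init__()
        # Separate uni-modal projection heads; the extra features
        # (e.g. age/gender) are concatenated to each modality's input.
        self.face_proj = nn.Sequential(
            nn.Linear(face_dim + extra_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        self.voice_proj = nn.Sequential(
            nn.Linear(voice_dim + extra_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, face_feat, voice_feat, extra_feat):
        f = F.normalize(self.face_proj(torch.cat([face_feat, extra_feat], dim=-1)), dim=-1)
        v = F.normalize(self.voice_proj(torch.cat([voice_feat, extra_feat], dim=-1)), dim=-1)
        return f, v


class AAMSoftmax(nn.Module):
    """ArcFace-style additive angular margin softmax over identity classes
    (one plausible reading of the AAM loss named in the abstract)."""

    def __init__(self, embed_dim, num_classes, margin=0.2, scale=30.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embed_dim))
        nn.init.xavier_uniform_(self.weight)
        self.margin, self.scale = margin, scale

    def forward(self, embeddings, labels):
        # Cosine similarity between embeddings and class centres.
        cos = F.linear(F.normalize(embeddings, dim=-1),
                       F.normalize(self.weight, dim=-1)).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)
        # Add the angular margin only to the target-class logit.
        target = F.one_hot(labels, cos.size(1)).bool()
        cos_margin = torch.where(target, torch.cos(theta + self.margin), cos)
        return F.cross_entropy(self.scale * cos_margin, labels)
```

In this sketch, face and voice embeddings of the same speaker would be classified against a shared set of identity classes (e.g. `loss = aam(f, labels) + aam(v, labels)`), which pulls both modalities toward common class centres in the shared space; at test time, face-voice verification could then be scored by cosine similarity between the two embeddings.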
