The mutual exclusivity bias of bilingual visually grounded speech models

4 June 2025
Dan Oneaţă, Leanne Nortje, Yevgen Matusevych, Herman Kamper
arXiv: 2506.04037
Main: 4 pages, 4 figures, 1 table. Bibliography: 1 page.
Abstract

Mutual exclusivity (ME) is a strategy where a novel word is associated with a novel object rather than a familiar one, facilitating language learning in children. Recent work has found an ME bias in a visually grounded speech (VGS) model trained on English speech with paired images. But ME has also been studied in bilingual children, who may employ it less due to cross-lingual ambiguity. We explore this pattern computationally using bilingual VGS models trained on combinations of English, French, and Dutch. We find that bilingual models generally exhibit a weaker ME bias than monolingual models, though exceptions exist. Analyses show that the combined visual embeddings of bilingual models have a smaller variance for familiar data, partly explaining the increase in confusion between novel and familiar concepts. We also provide new insights into why the ME bias exists in VGS models in the first place. Code and data: this https URL
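The ME test described in the abstract is a forced choice: given a spoken novel word, the model picks between a novel and a familiar image, and the ME bias is the fraction of trials where the novel image wins (chance is 0.5). As a rough illustration only, and not the authors' implementation, the sketch below computes this from precomputed audio and image embeddings; all names and the random stand-in embeddings are hypothetical.

```python
# Hypothetical sketch of a mutual-exclusivity (ME) forced-choice test for a
# VGS model. A novel spoken word is scored against a novel image and a
# familiar image; the ME bias is the fraction of trials where the novel
# image gets the higher similarity. Embeddings here are random stand-ins,
# not outputs of the paper's models, so the result hovers around chance (0.5).
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def me_bias(audio_embs, novel_img_embs, familiar_img_embs):
    """Fraction of trials where the novel word is matched to the novel image."""
    hits = sum(
        cosine(a, n) > cosine(a, f)
        for a, n, f in zip(audio_embs, novel_img_embs, familiar_img_embs)
    )
    return hits / len(audio_embs)

# Stand-in embeddings for 100 test trials (dimension 64 is arbitrary).
d, trials = 64, 100
audio = rng.normal(size=(trials, d))
novel_imgs = rng.normal(size=(trials, d))
familiar_imgs = rng.normal(size=(trials, d))

print(f"ME bias: {me_bias(audio, novel_imgs, familiar_imgs):.2f}")  # ~0.5 at chance
```

A score well above 0.5 on embeddings from a trained model would indicate an ME bias; the paper's finding is that this score tends to drop when the model is trained bilingually.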

BibTeX
@article{oneata2025_2506.04037,
  title={The mutual exclusivity bias of bilingual visually grounded speech models},
  author={Dan Oneata and Leanne Nortje and Yevgen Matusevych and Herman Kamper},
  journal={arXiv preprint arXiv:2506.04037},
  year={2025}
}