Do self-supervised speech models develop human-like perception biases?

Juliette Millet, Ewan Dunbar
31 May 2022
SSL

Papers citing "Do self-supervised speech models develop human-like perception biases?"

12 / 12 papers shown
fastabx: A library for efficient computation of ABX discriminability
Maxime Poli, Emmanuel Chemla, Emmanuel Dupoux
05 May 2025

Human-like Linguistic Biases in Neural Speech Models: Phonetic Categorization and Phonotactic Constraints in Wav2Vec2.0
Marianne de Heer Kloots, Willem H. Zuidema
03 Jul 2024

On the social bias of speech self-supervised models
Yi-Cheng Lin, T. Lin, Hsi-Che Lin, Andy T. Liu, Hung-yi Lee
07 Jun 2024

A predictive learning model can simulate temporal dynamics and context effects found in neural representations of continuous speech
Oli Danyi Liu, Hao Tang, Naomi H. Feldman, Sharon Goldwater
13 May 2024

Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers
Hosein Mohebbi, Grzegorz Chrupała, Willem H. Zuidema, A. Alishahi
15 Oct 2023

Probing self-supervised speech models for phonetic and phonemic information: a case study in aspiration
Kinan Martin, Jon Gauthier, Canaan Breiss, R. Levy
SSL
09 Jun 2023

Acoustic absement in detail: Quantifying acoustic differences across time-series representations of speech data
Matthew C. Kelley
12 Apr 2023

Self-supervised language learning from raw audio: Lessons from the Zero Resource Speech Challenge
Ewan Dunbar, Nicolas Hamilakis, Emmanuel Dupoux
SSL
27 Oct 2022

Decoding speech perception from non-invasive brain recordings
Alexandre Défossez, Charlotte Caucheteux, Jérémy Rapin, Ori Kabeli, J. King
25 Aug 2022

Toward a realistic model of speech processing in the brain with self-supervised learning
Juliette Millet, Charlotte Caucheteux, Pierre Orhan, Yves Boubenec, Alexandre Gramfort, Ewan Dunbar, Christophe Pallier, J. King
03 Jun 2022

Probing phoneme, language and speaker information in unsupervised speech representations
Maureen de Seyssel, Marvin Lavechin, Yossi Adi, Emmanuel Dupoux, Guillaume Wisniewski
SSL
30 Mar 2022

Generative Spoken Language Modeling from Raw Audio
Kushal Lakhotia, Evgeny Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, ..., Tu Nguyen, Jade Copet, Alexei Baevski, A. Mohamed, Emmanuel Dupoux
AuLLM
01 Feb 2021