ResearchTrend.AI

Exploring Local Interpretable Model-Agnostic Explanations for Speech Emotion Recognition with Distribution-Shift

7 April 2025
Maja J. Hjuler
Line H. Clemmensen
Sneha Das
Abstract

We introduce EmoLIME, a version of local interpretable model-agnostic explanations (LIME) for black-box Speech Emotion Recognition (SER) models. To the best of our knowledge, this is the first attempt to apply LIME to SER. EmoLIME generates high-level interpretable explanations and identifies which specific frequency ranges are most influential in determining emotional states. The approach aids in interpreting complex, high-dimensional embeddings such as those generated by end-to-end speech models. We evaluate EmoLIME qualitatively, quantitatively, and statistically across three emotional speech datasets, using classifiers trained on both hand-crafted acoustic features and Wav2Vec 2.0 embeddings. We find that EmoLIME is more robust across different models than across datasets with distribution shifts, highlighting its potential for more consistent explanations in SER tasks within a dataset.
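The abstract describes a LIME-style procedure: perturb interpretable components (here, frequency ranges) of an input, query the black-box SER model, and fit a locally weighted linear surrogate whose coefficients rank each range's influence. The sketch below illustrates that general recipe with NumPy only; all names, the band-masking scheme, and the proximity kernel are our assumptions for illustration, not the authors' EmoLIME implementation.

```python
import numpy as np

def lime_band_attribution(spectrogram, predict_fn, n_bands=8,
                          n_samples=200, seed=0):
    """LIME-style frequency-band attribution (illustrative sketch only,
    not the authors' EmoLIME code). Splits the spectrogram into n_bands
    frequency ranges, randomly zeroes out subsets of bands, queries the
    black-box predict_fn on each perturbed input, and fits a weighted
    linear surrogate whose coefficients rank band importance."""
    rng = np.random.default_rng(seed)
    n_freq = spectrogram.shape[0]
    edges = np.linspace(0, n_freq, n_bands + 1, dtype=int)

    # Binary interpretable representation: 1 = band kept, 0 = band masked.
    Z = rng.integers(0, 2, size=(n_samples, n_bands))
    Z[0] = 1  # include the unperturbed instance itself

    preds = np.empty(n_samples)
    for i, z in enumerate(Z):
        perturbed = spectrogram.copy()
        for b in range(n_bands):
            if z[b] == 0:
                perturbed[edges[b]:edges[b + 1], :] = 0.0
        preds[i] = predict_fn(perturbed)

    # Proximity kernel: samples with fewer masked bands count more.
    dist = 1.0 - Z.mean(axis=1)
    w = np.exp(-(dist ** 2) / 0.25)

    # Weighted least-squares surrogate; coefficients = band importances.
    X = np.hstack([np.ones((n_samples, 1)), Z.astype(float)])
    coef = np.linalg.lstsq(np.diag(w) @ X, w * preds, rcond=None)[0]
    return coef[1:]  # drop the intercept
```

For example, with a black-box whose score depends only on energy in the lowest band, the surrogate assigns that band the largest coefficient, mirroring how explanations like these identify influential frequency ranges.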

@article{hjuler2025_2504.05368,
  title={Exploring Local Interpretable Model-Agnostic Explanations for Speech Emotion Recognition with Distribution-Shift},
  author={Maja J. Hjuler and Line H. Clemmensen and Sneha Das},
  journal={arXiv preprint arXiv:2504.05368},
  year={2025}
}