Evaluation of Deep Audio Representations for Hearables

10 February 2025
Fabian Gröger
Pascal Baumann
Ludovic Amruthalingam
Laurent Simon
Ruksana Giurda
Simone Lionetti
ArXiv | PDF | HTML
Abstract

Effectively steering hearable devices requires understanding the acoustic environment around the user. In the computational analysis of sound scenes, foundation models have emerged as the state of the art to produce high-performance, robust, multi-purpose audio representations. We introduce and release Deep Evaluation of Audio Representations (DEAR), the first dataset and benchmark to evaluate the efficacy of foundation models in capturing essential acoustic properties for hearables. The dataset includes 1,158 audio tracks, each 30 seconds long, created by spatially mixing proprietary monologues with commercial, high-quality recordings of everyday acoustic scenes. Our benchmark encompasses eight tasks that assess the general context, speech sources, and technical acoustic properties of the audio scenes. Through our evaluation of four general-purpose audio representation models, we demonstrate that the BEATs model significantly surpasses its counterparts. This superiority underscores the advantage of models trained on diverse audio collections, confirming their applicability to a wide array of auditory tasks, including encoding the environment properties necessary for hearable steering. The DEAR dataset and associated code are available at this https URL.
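
Benchmarks of this kind typically evaluate frozen audio representations by training only a lightweight probe on each downstream task. The sketch below illustrates that general protocol, not the released DEAR code: the encoder, the synthetic 30-second clips, and the helper names (embed, make_fake_split) are placeholders standing in for a real foundation model such as BEATs and for one of the eight benchmark tasks.

```python
# A minimal sketch of frozen-representation evaluation with a linear probe.
# Everything below is illustrative: the encoder is a random stand-in and the
# data are synthetic clips, not the DEAR dataset.

import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

SAMPLE_RATE = 16_000
CLIP_SECONDS = 30          # DEAR clips are 30 seconds long
EMBED_DIM = 768            # typical transformer encoder width (assumption)


@torch.no_grad()
def embed(waveform: torch.Tensor) -> np.ndarray:
    """Placeholder for a frozen foundation model (e.g., BEATs).

    A real encoder returns frame-level features; here we mimic that with a
    random projection and mean-pool over time to get one clip-level embedding.
    """
    frames = waveform.unfold(-1, 400, 160)      # (n_frames, 400) frame view
    proj = torch.randn(400, EMBED_DIM)          # stand-in encoder weights
    return (frames @ proj).mean(dim=0).numpy()  # (EMBED_DIM,)


def make_fake_split(n_clips: int, n_classes: int = 2):
    """Synthetic stand-in for one task split (e.g., speech presence)."""
    clips = [torch.randn(SAMPLE_RATE * CLIP_SECONDS) for _ in range(n_clips)]
    labels = np.arange(n_clips) % n_classes     # guarantee both classes occur
    return clips, labels


if __name__ == "__main__":
    train_clips, y_train = make_fake_split(32)
    test_clips, y_test = make_fake_split(16)

    X_train = np.stack([embed(c) for c in train_clips])
    X_test = np.stack([embed(c) for c in test_clips])

    # Only the linear probe is trained; the representation stays frozen.
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("probe accuracy:", accuracy_score(y_test, probe.predict(X_test)))
```

With random embeddings the probe performs at chance; swapping in a real pretrained encoder is what makes the task scores meaningful, which is exactly the comparison the benchmark draws across the four representation models.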

View on arXiv
@article{gröger2025_2502.06664,
  title={Evaluation of Deep Audio Representations for Hearables},
  author={Fabian Gröger and Pascal Baumann and Ludovic Amruthalingam and Laurent Simon and Ruksana Giurda and Simone Lionetti},
  journal={arXiv preprint arXiv:2502.06664},
  year={2025}
}