
BLAB: Brutally Long Audio Bench

Abstract

Developing large audio language models (LMs) capable of understanding diverse spoken interactions is essential for accommodating the multimodal nature of human communication and can increase the accessibility of language technologies across different user populations. Recent work has primarily evaluated audio LMs on short audio segments, typically under 30 seconds, with limited exploration of long-form conversational speech segments that more closely reflect natural user interactions with these models. We introduce Brutally Long Audio Bench (BLAB), a challenging long-form audio benchmark that evaluates audio LMs on localization, duration estimation, emotion, and counting tasks using audio segments averaging 51 minutes in length. BLAB consists of 833+ hours of diverse, full-length audio clips, each paired with human-annotated, text-based natural language questions and answers. Our audio data were collected from permissively licensed sources and underwent a human-assisted filtering process to ensure task compliance. We evaluate six open-source and proprietary audio LMs on BLAB and find that all of them, including advanced models such as Gemini 2.0 Pro and GPT-4o, struggle with the tasks in BLAB. Our comprehensive analysis reveals key insights into the trade-offs between task difficulty and audio duration. In general, we find that audio LMs struggle with long-form speech, with performance declining as duration increases. They perform poorly on localization, temporal reasoning, and counting, and they struggle to understand non-phonemic information, relying more on prompts than on audio content. BLAB serves as a challenging evaluation framework for developing audio LMs with robust long-form audio understanding capabilities.

@article{ahia2025_2505.03054,
  title={BLAB: Brutally Long Audio Bench},
  author={Orevaoghene Ahia and Martijn Bartelds and Kabir Ahuja and Hila Gonen and Valentin Hofmann and Siddhant Arora and Shuyue Stella Li and Vishal Puttagunta and Mofetoluwa Adeyemi and Charishma Buchireddy and Ben Walls and Noah Bennett and Shinji Watanabe and Noah A. Smith and Yulia Tsvetkov and Sachin Kumar},
  journal={arXiv preprint arXiv:2505.03054},
  year={2025}
}