AudioBench: A Universal Benchmark for Audio Large Language Models

23 June 2024
Bin Wang
Xunlong Zou
Geyu Lin
Shuo Sun
Zhuohan Liu
Wenyu Zhang
Zhengyuan Liu
AiTi Aw
Nancy F. Chen
Topics: AuLLM, ELM, LM&MA
Abstract

We introduce AudioBench, a universal benchmark designed to evaluate Audio Large Language Models (AudioLLMs). It encompasses 8 distinct tasks and 26 datasets, of which 7 are newly proposed. The evaluation targets three main aspects: speech understanding, audio scene understanding, and voice understanding (paralinguistics). Despite recent advancements, AudioLLMs have lacked a comprehensive benchmark for instruction-following capabilities conditioned on audio signals. AudioBench addresses this gap by providing the datasets along with suitable evaluation metrics. We also evaluated five popular models and found that no single model excels consistently across all tasks. We outline the research outlook for AudioLLMs and anticipate that our open-sourced evaluation toolkit, data, and leaderboard will offer a robust testbed for future model development.
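The finding that no single model wins everywhere comes from comparing per-task scores across models. A minimal sketch of that kind of per-task comparison is below; the model names, task names, and scores are all hypothetical illustrations, not AudioBench's actual toolkit API or results.

```python
# Hypothetical per-task leaderboard aggregation for a multi-task benchmark.
# All model names, task names, and scores below are illustrative only.
scores = {
    "model_a": {"speech_qa": 0.71, "audio_scene": 0.55, "paralinguistic": 0.48},
    "model_b": {"speech_qa": 0.64, "audio_scene": 0.61, "paralinguistic": 0.52},
}

def best_model_per_task(scores):
    """Return the top-scoring model for each task in the benchmark."""
    tasks = next(iter(scores.values())).keys()
    return {task: max(scores, key=lambda m: scores[m][task]) for task in tasks}

print(best_model_per_task(scores))
# {'speech_qa': 'model_a', 'audio_scene': 'model_b', 'paralinguistic': 'model_b'}
```

If the resulting winner map names more than one model, no single model dominates across all tasks, which mirrors the observation reported in the abstract.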

@article{wang2025_2406.16020,
  title={AudioBench: A Universal Benchmark for Audio Large Language Models},
  author={Bin Wang and Xunlong Zou and Geyu Lin and Shuo Sun and Zhuohan Liu and Wenyu Zhang and Zhengyuan Liu and AiTi Aw and Nancy F. Chen},
  journal={arXiv preprint arXiv:2406.16020},
  year={2025}
}