Solla: Towards a Speech-Oriented LLM That Hears Acoustic Context

19 March 2025
Junyi Ao
Dekun Chen
Xiaohai Tian
Wenjie Feng
Jun Zhang
Lu Lu
Yuxuan Wang
Haizhou Li
Zhizheng Wu
Abstract

Large Language Models (LLMs) have recently shown remarkable ability to process not only text but also multimodal inputs such as speech and audio. However, most existing models primarily focus on analyzing input signals using text instructions, overlooking scenarios in which speech instructions and audio are mixed and serve as inputs to the model. To address these challenges, we introduce Solla, a novel framework designed to understand speech-based questions and hear the acoustic context concurrently. Solla incorporates an audio tagging module to effectively identify and represent audio events, as well as an ASR-assisted prediction method to improve comprehension of spoken content. To rigorously evaluate Solla and other publicly available models, we propose a new benchmark dataset called SA-Eval, which includes three tasks: audio event classification, audio captioning, and audio question answering. SA-Eval contains diverse speech instructions with various speaking styles and encompasses two difficulty levels, easy and hard, to capture the range of real-world acoustic conditions. Experimental results show that Solla performs on par with or outperforms baseline models on both the easy and hard test sets, underscoring its effectiveness in jointly understanding speech and audio.
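The abstract describes two components: an audio tagging module that represents acoustic events, and an ASR-assisted prediction path that helps the model understand the spoken question. The sketch below is purely illustrative and is not the authors' code; the function names, signatures, and text-prompt fusion strategy are assumptions used only to show how tagged audio events and an ASR transcript could be combined before conditioning a speech-oriented LLM.

```python
# Minimal conceptual sketch (illustrative assumptions, not the Solla implementation):
# combine audio-event tags and an ASR transcript of the spoken question into a
# single prompt a speech-oriented LLM could condition on.

from dataclasses import dataclass
from typing import List


@dataclass
class SollaInput:
    speech_question: bytes   # spoken instruction waveform (placeholder type)
    acoustic_context: bytes  # ambient audio the model should "hear"


def tag_audio_events(acoustic_context: bytes) -> List[str]:
    """Stand-in for an audio tagging module: returns coarse event labels."""
    # A real system would run an audio tagger here; we return a fixed example.
    return ["dog_bark", "traffic"]


def asr_transcribe(speech_question: bytes) -> str:
    """Stand-in for the ASR-assisted path: transcribes the spoken question."""
    return "what animal can you hear in the background?"


def build_llm_prompt(sample: SollaInput) -> str:
    """Fuse event tags and the ASR transcript into one text prompt."""
    tags = ", ".join(tag_audio_events(sample.acoustic_context))
    transcript = asr_transcribe(sample.speech_question)
    return (
        f"[AUDIO EVENTS] {tags}\n"
        f"[SPOKEN QUESTION] {transcript}\n"
        "[TASK] Answer the spoken question using the acoustic context."
    )


if __name__ == "__main__":
    sample = SollaInput(speech_question=b"...", acoustic_context=b"...")
    print(build_llm_prompt(sample))
```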

@article{ao2025_2503.15338,
  title={Solla: Towards a Speech-Oriented LLM That Hears Acoustic Context},
  author={Junyi Ao and Dekun Chen and Xiaohai Tian and Wenjie Feng and Jun Zhang and Lu Lu and Yuxuan Wang and Haizhou Li and Zhizheng Wu},
  journal={arXiv preprint arXiv:2503.15338},
  year={2025}
}