Spoken Language Understanding on Unseen Tasks With In-Context Learning

12 May 2025
Neeraj Agrawal
Sriram Ganapathy
Abstract

Spoken language understanding (SLU) tasks involve diverse skills that probe the information extraction, classification, and/or generation capabilities of models. In this setting, task-specific training data may not always be available. While traditional task-specific SLU models cannot cater to such requirements, speech-text large language models (LLMs) offer a promising alternative with emergent abilities. However, our evaluations indicate that the out-of-the-box zero/few-shot performance of prominent open-source speech-text LLMs on SLU tasks is not up to the mark. In this paper, we introduce a novel approach to robust task-agnostic fine-tuning using randomized class labels. With the proposed fine-tuning, we show that the performance of speech-text LLMs on an unseen task improves significantly over standard approaches. Critically, the proposed approach removes the need for task-specific data annotations when enabling new tasks in speech-text LLMs.
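To make the randomized-label idea concrete, below is a minimal Python sketch of how such task-agnostic fine-tuning data could be constructed. This is an illustration only, not the authors' implementation: all names (SYMBOL_POOL, randomize_class_labels, build_prompt) are hypothetical, and plain-text transcripts stand in for the speech inputs a speech-text LLM would actually consume.

import random

# Pool of arbitrary symbols used in place of natural-language class
# labels (hypothetical; the paper does not specify the label set).
SYMBOL_POOL = ["alpha", "beta", "gamma", "delta", "omega", "zeta"]

def randomize_class_labels(task_examples, rng):
    """Remap a task's class labels to randomly chosen symbols.

    task_examples: list of (utterance, label) pairs for one task.
    Because the symbol-to-class assignment changes per fine-tuning
    instance, the model must infer the label mapping from the
    in-context demonstrations rather than rely on label semantics
    memorized during pre-training.
    """
    labels = sorted({lab for _, lab in task_examples})
    symbols = rng.sample(SYMBOL_POOL, k=len(labels))
    mapping = dict(zip(labels, symbols))
    return [(utt, mapping[lab]) for utt, lab in task_examples]

def build_prompt(demos, query_utterance):
    """Format k demonstrations plus one query as a single prompt."""
    lines = [f"Speech: {utt}\nLabel: {lab}" for utt, lab in demos]
    lines.append(f"Speech: {query_utterance}\nLabel:")
    return "\n\n".join(lines)

# Usage: one fine-tuning instance for a toy intent-classification task.
rng = random.Random(0)
examples = [
    ("play some jazz", "music"),
    ("what's the weather tomorrow", "weather"),
    ("turn on the kitchen light", "smart_home"),
]
remapped = randomize_class_labels(examples, rng)
demos, (query, target) = remapped[:-1], remapped[-1]
print(build_prompt(demos, query))  # model is trained to emit `target`

At inference on an unseen task, the same prompt format is used with a few annotated demonstrations of the new task, so no task-specific fine-tuning annotations are required.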

View on arXiv: https://arxiv.org/abs/2505.07731
@article{agrawal2025_2505.07731,
  title={Spoken Language Understanding on Unseen Tasks With In-Context Learning},
  author={Neeraj Agrawal and Sriram Ganapathy},
  journal={arXiv preprint arXiv:2505.07731},
  year={2025}
}