Omni-R1: Do You Really Need Audio to Fine-Tune Your Audio LLM?

Abstract

We propose Omni-R1, which fine-tunes a recent multi-modal LLM, Qwen2.5-Omni, on an audio question answering dataset with the reinforcement learning method GRPO. This leads to new state-of-the-art performance on the recent MMAU benchmark: Omni-R1 achieves the highest accuracies on the sounds, music, speech, and overall average categories, on both the Test-mini and Test-full splits. To understand the performance improvement, we tested models both with and without audio and found that much of the gain from GRPO could be attributed to better text-based reasoning. We also made the surprising discovery that fine-tuning without audio, on a text-only dataset, was effective at improving audio-based performance.
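The abstract does not include implementation details, but the core of GRPO can be sketched in a few lines: for each prompt, a group of completions is sampled, each is scored by a reward function, and each completion's advantage is its reward standardized within the group. The binary multiple-choice reward below is a hypothetical example for an audio QA setup, not the paper's actual reward; all function names here are illustrative.

```python
# Illustrative sketch (not the paper's code): the group-relative advantage
# estimate at the core of GRPO. For each prompt, several completions are
# sampled; each completion's advantage is its reward standardized within
# the sampled group (no learned value function needed).
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-8):
    """Standardize rewards within one sampled group, GRPO-style."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Hypothetical binary reward for multiple-choice audio QA: 1.0 if the
# model's extracted answer letter matches the reference, else 0.0.
def choice_reward(predicted_letter, gold_letter):
    return 1.0 if predicted_letter.strip().upper() == gold_letter.upper() else 0.0

# Example: a group of 4 sampled answers for one question, gold answer "B".
rewards = [choice_reward(p, "B") for p in ["B", "A", "B", "C"]]
advantages = group_relative_advantages(rewards)
# Correct answers get positive advantages, incorrect ones negative.
```

In GRPO these advantages then weight the policy-gradient update on the corresponding completion tokens, so completions scored above their group's mean are reinforced and the rest are suppressed.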

@article{rouditchenko2025_2505.09439,
  title={Omni-R1: Do You Really Need Audio to Fine-Tune Your Audio LLM?},
  author={Andrew Rouditchenko and Saurabhchand Bhati and Edson Araujo and Samuel Thomas and Hilde Kuehne and Rogerio Feris and James Glass},
  journal={arXiv preprint arXiv:2505.09439},
  year={2025}
}