When Scaling Fails: Mitigating Audio Perception Decay of LALMs via Multi-Step Perception-Aware Reasoning

Ruixiang Mao
Xiangnan Ma
Dan Chen
Ziming Zhu
Yuan Ge
Aokai Hao
Haishu Zhao
Yifu Huo
Qing Yang
Kaiyan Chang
Xiaoqian Liu
Chenglong Wang
Qiaozhi He
Tong Xiao
Jingbo Zhu
Main: 8 pages · 9 figures · 6 tables · Bibliography: 2 pages · Appendix: 18 pages
Abstract

Test-Time Scaling has shown notable efficacy in addressing complex problems by scaling inference compute. However, Large Audio-Language Models (LALMs) exhibit a counterintuitive phenomenon: post-training models for structured reasoning trajectories yields marginal or even negative gains compared to post-training for direct answering. To investigate this phenomenon, we introduce CAFE, an evaluation framework designed to precisely quantify audio reasoning errors. Evaluation results reveal that LALMs struggle with perception during reasoning and encounter a critical bottleneck: reasoning performance suffers from audio perception decay as reasoning length extends. To address this, we propose MPAR², a paradigm that encourages dynamic perceptual reasoning and decomposes complex questions into perception-rich sub-problems. Leveraging reinforcement learning, MPAR² improves perception performance on CAFE from 31.74% to 63.51% and effectively mitigates perception decay, while concurrently enhancing reasoning capability to achieve 74.59% accuracy on the MMAU benchmark. Further analysis demonstrates that MPAR² reinforces LALMs' attention to audio input and dynamically adapts the reasoning budget to match task complexity.
