
AVATAR: Reinforcement Learning to See, Hear, and Reason Over Video

Main: 8 pages
Appendix: 11 pages
Bibliography: 3 pages
Figures: 16
Tables: 10
Abstract

Multimodal reasoning over long-horizon video is challenging due to the need for precise spatiotemporal fusion and alignment across modalities. While recent methods such as Group Relative Policy Optimization (GRPO) have shown promise in this domain, they suffer from three key limitations: (1) data inefficiency from their on-policy design, (2) a vanishing advantage problem, where identical or near-identical rewards within a group eliminate the learning signal by producing zero-valued advantages, and (3) uniform credit assignment that fails to emphasize critical reasoning steps. We introduce AVATAR (Audio-Video AgenT for Alignment and Reasoning), a framework that addresses these limitations through two core components: (1) an off-policy training architecture that improves sample efficiency and resolves vanishing advantages by reusing past experiences with greater reward diversity, and (2) Temporal Advantage Shaping (TAS), a novel credit assignment strategy that upweights key reasoning phases during learning. AVATAR achieves strong performance across various benchmarks, outperforming the Qwen2.5-Omni baseline by +5.4 on MMVU, +4.9 on OmniBench, and +4.5 on Video-Holmes, while demonstrating 5× sample efficiency, requiring 80% fewer generated completions to reach target performance.
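To make the vanishing advantage problem and the idea of temporal credit shaping concrete, here is a minimal Python sketch. It shows the standard group-relative advantage (reward minus group mean, normalized by group std) collapsing to zero when all completions in a group receive the same reward, and a simple per-token weighting that upweights a designated reasoning span. The function names, the 1.5x weight, and the phase boundaries are illustrative assumptions, not the paper's exact formulation of TAS.

```python
import numpy as np

def group_relative_advantages(rewards):
    """GRPO-style advantage: each completion's reward minus the group mean,
    normalized by the group std. If all rewards in a group are identical,
    the numerator is zero everywhere and the learning signal vanishes."""
    rewards = np.asarray(rewards, dtype=np.float64)
    centered = rewards - rewards.mean()
    return centered / (rewards.std() + 1e-8)

def temporal_advantage_shaping(advantage, num_tokens, key_phase):
    """Hypothetical per-token credit assignment: broadcast the sequence-level
    advantage to every token, then upweight tokens inside a designated
    'key reasoning phase'. The 1.5x weight and phase boundaries are
    assumptions for illustration, not the paper's schedule."""
    start, end = key_phase
    weights = np.ones(num_tokens)
    weights[start:end] = 1.5          # emphasize the key reasoning span
    weights /= weights.mean()         # keep the average scale unchanged
    return advantage * weights

# Vanishing advantage: identical rewards in a group -> all-zero advantages.
print(group_relative_advantages([1.0, 1.0, 1.0, 1.0]))   # [0. 0. 0. 0.]

# Diverse rewards (e.g. obtained by mixing in off-policy replays) restore
# a nonzero signal, which can then be shaped over the token sequence.
adv = group_relative_advantages([1.0, 0.0, 0.5, 1.0])
print(temporal_advantage_shaping(adv[0], num_tokens=8, key_phase=(2, 5)))
```

The sketch is only meant to illustrate why reusing past experiences with more diverse rewards avoids the all-zero advantage case, and how a temporal weighting can emphasize certain reasoning steps without changing the overall advantage scale.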
