All Papers

Multimodal reasoning over long-horizon video is challenging due to the need for precise spatiotemporal fusion and alignment across modalities. While recent methods such as Group Relative Policy Optimization (GRPO) have shown promise in this domain, they suffer from three key limitations: (1) data inefficiency stemming from their on-policy design, (2) a vanishing-advantage problem, where identical or near-identical rewards within a group eliminate the learning signal by producing zero-valued advantages, and (3) uniform credit assignment that fails to emphasize critical reasoning steps. We introduce AVATAR (Audio-Video AgenT for Alignment and Reasoning), a framework that addresses these limitations through two core components: (1) an off-policy training architecture that improves sample efficiency and resolves the vanishing-advantage problem by reusing past experiences with greater reward diversity, and (2) Temporal Advantage Shaping (TAS), a novel credit assignment strategy that upweights key reasoning phases during learning. AVATAR achieves strong performance across multiple benchmarks, outperforming the Qwen2.5-Omni baseline on MMVU, OmniBench, and Video-Holmes, while demonstrating **5× sample efficiency**, requiring fewer generated completions to reach target performance.
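The vanishing-advantage problem described above follows directly from how GRPO normalizes rewards within a group of sampled completions. The minimal sketch below illustrates it, assuming the standard group-relative formulation (advantage = reward minus group mean, divided by group standard deviation); the function name and example rewards are illustrative, not taken from the paper.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize each completion's reward against its group."""
    rewards = np.asarray(rewards, dtype=np.float64)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Diverse rewards within a group -> informative, non-zero advantages.
print(group_relative_advantages([1.0, 0.0, 1.0, 0.0]))  # [ 1. -1.  1. -1.]

# Identical rewards (all completions judged the same) -> every advantage is zero,
# so the policy gradient for this group vanishes: the "vanishing advantage" problem.
print(group_relative_advantages([1.0, 1.0, 1.0, 1.0]))  # [0. 0. 0. 0.]
```

Reusing past experiences off-policy, as the abstract describes, would make all-identical reward groups less likely because completions drawn from different policy snapshots tend to have more diverse rewards; the exact replay and reweighting mechanism, and the precise form of TAS, are detailed in the paper rather than here.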