
πRL: Online RL Fine-tuning for Flow-based Vision-Language-Action Models

Main: 9 pages
Appendix: 12 pages
Bibliography: 3 pages
17 figures, 13 tables
Abstract

Vision-Language-Action (VLA) models enable robots to understand and perform complex tasks from multimodal input. Although recent work explores using reinforcement learning (RL) to automate the laborious data collection process required to scale supervised fine-tuning (SFT), applying RL to large-scale flow-based VLAs (e.g., π0, π0.5) remains challenging due to the intractable action log-likelihoods arising from flow matching. We address this challenge with πRL, featuring two technical approaches: (1) Flow-Noise models the denoising process as a discrete-time MDP with a learnable noise network for exact log-likelihood computation. (2) Flow-SDE integrates denoising with agent-environment interaction, formulating a two-layer MDP that employs ODE-to-SDE conversion for efficient RL exploration. We evaluate πRL across various benchmarks, with experiments demonstrating that RL yields significant performance improvements in both in-distribution and out-of-distribution settings.
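The key idea shared by both approaches is that replacing the deterministic flow-matching sampler with stochastic denoising steps makes each step a Gaussian transition, whose log-likelihood is tractable and can be summed into the action log-probability that policy-gradient RL needs. Below is a minimal numpy sketch of this idea, not the paper's implementation; `velocity_fn`, the fixed noise scale `sigma`, and the Euler-Maruyama discretization are illustrative assumptions.

```python
import math
import numpy as np

def stochastic_denoise_with_logprob(velocity_fn, x, num_steps=10, sigma=0.1, rng=None):
    """Roll out a stochastic (SDE-style) denoising chain over t in [0, 1].

    Each step draws x_{k+1} ~ N(x_k + v(x_k, t_k) * dt, std^2 I), so the
    per-step Gaussian log-density is exact; summing over steps gives the
    action log-likelihood used by PPO-style objectives.
    velocity_fn(x, t) stands in for the flow model's velocity prediction.
    """
    rng = rng or np.random.default_rng(0)
    dt = 1.0 / num_steps
    logp = np.zeros(x.shape[0])  # one log-prob per batch element
    for k in range(num_steps):
        t = k * dt
        mean = x + velocity_fn(x, t) * dt      # deterministic ODE drift
        std = sigma * math.sqrt(dt)            # injected exploration noise
        x_next = mean + std * rng.standard_normal(x.shape)
        # log N(x_next; mean, std^2 I), summed over action dimensions
        step_logp = (-0.5 * ((x_next - mean) / std) ** 2
                     - math.log(std) - 0.5 * math.log(2.0 * math.pi))
        logp += step_logp.reshape(x.shape[0], -1).sum(axis=1)
        x = x_next
    return x, logp
```

With `sigma = 0`, the chain degenerates to the deterministic ODE sampler and the log-likelihood becomes undefined, which is precisely why a stochastic formulation (a learnable noise network in Flow-Noise, or an ODE-to-SDE conversion in Flow-SDE) is needed for RL.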
