RLVR-World: Training World Models with Reinforcement Learning

20 May 2025
Jialong Wu
Shaofeng Yin
Ningya Feng
Mingsheng Long
Communities: OffRL, VGen
Abstract

World models predict state transitions in response to actions and are increasingly developed across diverse modalities. However, standard training objectives such as maximum likelihood estimation (MLE) are often misaligned with the task-specific goals of world models, i.e., transition prediction metrics like accuracy or perceptual quality. In this paper, we present RLVR-World, a unified framework that leverages reinforcement learning with verifiable rewards (RLVR) to directly optimize world models for such metrics. Although world modeling is formulated as autoregressive prediction of tokenized sequences, RLVR-World evaluates metrics of the decoded predictions as verifiable rewards. We demonstrate substantial performance gains on both language- and video-based world models across domains, including text games, web navigation, and robot manipulation. Our work indicates that, beyond recent advances in reasoning language models, RLVR offers a promising post-training paradigm for enhancing the utility of generative models more broadly.
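To make the idea concrete, below is a minimal sketch (in PyTorch) of an RLVR-style update for a tokenized world model: sample a predicted next-state token sequence, score the decoded prediction with a verifiable task metric, and reinforce high-reward samples. This is not the authors' implementation; TinyWorldModel, verifiable_reward, and the exact-match reward are hypothetical stand-ins for the paper's models and task metrics (e.g., accuracy or perceptual quality).

# Hypothetical sketch of the RLVR idea from the abstract; not the paper's code.
import torch
import torch.nn as nn

class TinyWorldModel(nn.Module):
    """Toy autoregressive token predictor over (state, action) -> next-state tokens."""
    def __init__(self, vocab_size=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # per-position logits over next-state tokens

def verifiable_reward(pred_tokens, target_tokens):
    # Placeholder verifiable reward: exact-match accuracy of the decoded
    # prediction against the ground-truth next state. The paper instead uses
    # task metrics such as prediction accuracy or perceptual quality.
    return (pred_tokens == target_tokens).float().mean(dim=-1)

model = TinyWorldModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

context = torch.randint(0, 32, (8, 10))  # tokenized (state, action) prefix
target = torch.randint(0, 32, (8, 10))   # tokenized ground-truth next state

# One REINFORCE-style RLVR step: sample predictions, score decoded outputs
# with the verifiable reward, and reinforce samples above the batch baseline.
logits = model(context)
dist = torch.distributions.Categorical(logits=logits)
sample = dist.sample()                    # sampled next-state tokens
reward = verifiable_reward(sample, target)
baseline = reward.mean()                  # simple variance-reduction baseline
log_prob = dist.log_prob(sample).sum(dim=-1)
loss = -((reward - baseline) * log_prob).mean()

opt.zero_grad()
loss.backward()
opt.step()

The design choice mirrored here is the one the abstract emphasizes: the reward is computed on decoded predictions rather than on token-level likelihoods, so any verifiable transition-prediction metric can directly drive the gradient.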

View on arXiv: https://arxiv.org/abs/2505.13934
@article{wu2025_2505.13934,
  title={RLVR-World: Training World Models with Reinforcement Learning},
  author={Jialong Wu and Shaofeng Yin and Ningya Feng and Mingsheng Long},
  journal={arXiv preprint arXiv:2505.13934},
  year={2025}
}