Adapting World Models with Latent-State Dynamics Residuals

Simulation-to-reality reinforcement learning (RL) faces the critical challenge of reconciling discrepancies between simulated and real-world dynamics, which can severely degrade agent performance. A promising approach is to learn corrections to the simulator's forward dynamics as a residual error function; however, this is impractical with high-dimensional states such as images. To overcome this, we propose ReDRAW, a latent-state autoregressive world model pretrained in simulation and calibrated to target environments through residual corrections of latent-state dynamics rather than of explicit observed states. Using this adapted world model, ReDRAW enables RL agents to be optimized with imagined rollouts under corrected dynamics and then deployed in the real world. In multiple vision-based MuJoCo domains and on a physical-robot visual lane-following task, ReDRAW effectively models changes to dynamics and avoids overfitting in low-data regimes where traditional transfer methods fail.
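The core idea described above, applying a learned residual correction to a pretrained model's latent-state transition and rolling the corrected dynamics forward autoregressively, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the linear maps `W_sim` and `W_res`, the dimensions, and all function names are hypothetical stand-ins for the learned world-model components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear stand-ins (illustrative only) for the
# simulator-pretrained latent dynamics and a small residual head
# trained on limited target-environment data.
LATENT_DIM, ACTION_DIM = 8, 2
W_sim = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM + ACTION_DIM))
W_res = rng.normal(scale=0.01, size=(LATENT_DIM, LATENT_DIM + ACTION_DIM))

def sim_dynamics(z, a):
    """Frozen, simulation-pretrained latent transition z' = f_sim(z, a)."""
    return W_sim @ np.concatenate([z, a])

def residual(z, a):
    """Learned correction capturing the sim-to-real dynamics gap."""
    return W_res @ np.concatenate([z, a])

def corrected_step(z, a):
    """Adapted dynamics: the residual is added in latent space,
    not to high-dimensional observations such as images."""
    return sim_dynamics(z, a) + residual(z, a)

def imagine_rollout(z0, actions):
    """Autoregressive imagined rollout under the corrected dynamics,
    the kind of trajectory an RL agent would be optimized on."""
    zs = [z0]
    for a in actions:
        zs.append(corrected_step(zs[-1], a))
    return zs

z0 = rng.normal(size=LATENT_DIM)
actions = [rng.normal(size=ACTION_DIM) for _ in range(5)]
traj = imagine_rollout(z0, actions)  # z0 plus 5 imagined latent states
```

In practice the dynamics and residual would be neural networks and the rollout would feed an actor-critic objective; the sketch only shows where the residual enters: in latent space, inside the autoregressive loop.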
@article{lanier2025_2504.02252,
  title   = {Adapting World Models with Latent-State Dynamics Residuals},
  author  = {JB Lanier and Kyungmin Kim and Armin Karamzade and Yifei Liu and Ankita Sinha and Kat He and Davide Corsi and Roy Fox},
  journal = {arXiv preprint arXiv:2504.02252},
  year    = {2025}
}