Data-Assimilated Model-Based Reinforcement Learning for Partially Observed Chaotic Flows

23 April 2025
Defne E. Ozan
Andrea Nóvoa
Luca Magri
Abstract

The goal of many applications in the energy and transport sectors is to control turbulent flows. However, because of their chaotic dynamics and high dimensionality, turbulent flows are exceedingly difficult to control. Model-free reinforcement learning (RL) methods can discover optimal control policies by interacting with the environment, but they require full state information, which is often unavailable in experimental settings. We propose a data-assimilated model-based RL (DA-MBRL) framework for systems with partial observability and noisy measurements. Our framework employs a control-aware Echo State Network for data-driven prediction of the dynamics, and integrates data assimilation with an Ensemble Kalman Filter for real-time state estimation. An off-policy actor-critic algorithm is employed to learn optimal control strategies from state estimates. The framework is tested on the Kuramoto-Sivashinsky equation, demonstrating its effectiveness in stabilizing a spatiotemporally chaotic flow from noisy and partial measurements.
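The loop described in the abstract (a control-aware Echo State Network surrogate, an Ensemble Kalman Filter assimilating noisy partial measurements, and a policy acting on the state estimate) can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the untrained ESN weights, the stochastic EnKF update, the placeholder linear policy standing in for the off-policy actor-critic, and the toy "true" dynamics used in place of a Kuramoto-Sivashinsky solver are all assumptions introduced for illustration.

# Minimal DA-MBRL-style loop (illustrative sketch, NumPy only).
import numpy as np

rng = np.random.default_rng(0)
N_state, N_obs, N_act, N_res, N_ens = 64, 8, 4, 200, 32

# --- Control-aware echo state network (surrogate model) --------------------
W_in = rng.uniform(-0.1, 0.1, (N_res, N_state + N_act))      # input weights
W_res = rng.normal(0, 1, (N_res, N_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))      # spectral radius < 1
W_out = rng.normal(0, 0.01, (N_state, N_res))                # would be trained (e.g. ridge regression)

def esn_step(r, u, a):
    # Advance the reservoir state r given flow state u and actuation a.
    r_new = np.tanh(W_res @ r + W_in @ np.concatenate([u, a]))
    return r_new, W_out @ r_new                               # predicted next flow state

# --- Observation operator: sparse, noisy point measurements ----------------
obs_idx = np.linspace(0, N_state - 1, N_obs).astype(int)
def observe(u, noise=0.05):
    return u[obs_idx] + noise * rng.normal(size=N_obs)

# --- Stochastic ensemble Kalman filter analysis step -----------------------
def enkf_update(U_ens, y, noise=0.05):
    # U_ens: (N_ens, N_state) forecast ensemble; y: observation vector.
    Y_ens = U_ens[:, obs_idx] + noise * rng.normal(size=(N_ens, N_obs))
    Au = U_ens - U_ens.mean(0)
    Ay = Y_ens - Y_ens.mean(0)
    K = (Au.T @ Ay) @ np.linalg.inv(Ay.T @ Ay + 1e-6 * np.eye(N_obs))
    return U_ens + (y - Y_ens) @ K.T                          # analysis ensemble

# --- Placeholder linear policy (stand-in for the actor-critic actor) -------
Theta = rng.normal(0, 0.01, (N_act, N_state))
def policy(u_hat):
    return np.tanh(Theta @ u_hat)

# --- Closed loop: act on the estimate, forecast with the ESN, assimilate ---
U_ens = rng.normal(0, 1, (N_ens, N_state))                    # initial ensemble
R_ens = np.zeros((N_ens, N_res))
u_true = rng.normal(0, 1, N_state)                            # toy stand-in for the KS solver state

for t in range(10):
    a = policy(U_ens.mean(0))                                 # control from the state estimate
    for k in range(N_ens):                                    # forecast each ensemble member
        R_ens[k], U_ens[k] = esn_step(R_ens[k], U_ens[k], a)
    u_true = 0.99 * u_true                                    # placeholder "true" dynamics
    U_ens = enkf_update(U_ens, observe(u_true))               # assimilate noisy partial data

In the full framework, the actor-critic would be trained off-policy on transitions of the assimilated state estimate, and the ESN forecast would replace expensive environment rollouts; the sketch only shows how the three components pass information in one closed loop.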

@article{ozan2025_2504.16588,
  title={Data-Assimilated Model-Based Reinforcement Learning for Partially Observed Chaotic Flows},
  author={Defne E. Ozan and Andrea N\'ovoa and Luca Magri},
  journal={arXiv preprint arXiv:2504.16588},
  year={2025}
}