
Linear Mixture Distributionally Robust Markov Decision Processes

Main: 14 pages
Figures: 8
Bibliography: 4 pages
Appendix: 8 pages
Abstract

Many real-world decision-making problems face the off-dynamics challenge: the agent learns a policy in a source domain and deploys it in a target domain with different state transitions. The distributionally robust Markov decision process (DRMDP) addresses this challenge by finding a robust policy that performs well under the worst-case environment within a pre-specified uncertainty set of transition dynamics. Its effectiveness hinges heavily on the proper design of these uncertainty sets, based on prior knowledge of the dynamics. In this work, we propose a novel linear mixture DRMDP framework, where the nominal dynamics are assumed to follow a linear mixture model. In contrast with existing uncertainty sets defined directly as a ball centered around the nominal kernel, linear mixture DRMDPs define the uncertainty set via a ball around the mixture weighting parameter. We show that, when prior knowledge about the mixture model is available, this new framework provides a more refined representation of uncertainty than conventional models based on (s,a)-rectangularity and d-rectangularity. We propose a meta algorithm for robust policy learning in linear mixture DRMDPs with uncertainty sets defined by general f-divergences, and analyze its sample complexity under three divergence instantiations: total variation, Kullback-Leibler, and χ² divergences. These results establish the statistical learnability of linear mixture DRMDPs, laying a theoretical foundation for future research on this new setting.
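As a rough illustration (not taken from the paper itself), in a standard linear mixture model the nominal kernel is a weighted combination of d known basis kernels, and the parameter-ball uncertainty set described in the abstract would then plausibly take a form like the following, assuming the mixture weights lie in the probability simplex so that an f-divergence between weight vectors is well defined:

\[
P_{\theta}(s' \mid s, a) \;=\; \sum_{i=1}^{d} \theta_i \, \phi_i(s' \mid s, a),
\qquad
\mathcal{U}_{\rho}(\theta^{0}) \;=\; \bigl\{ \theta \in \Delta_{d} : D_{f}\bigl(\theta \,\|\, \theta^{0}\bigr) \le \rho \bigr\},
\]

where the \(\phi_i\) are known basis kernels, \(\theta^{0}\) is the nominal mixture weight vector, \(D_{f}\) is an f-divergence (e.g., total variation, KL, or χ²), and \(\rho\) is the uncertainty radius. This contrasts with (s,a)-rectangular sets, which place a separate ball around the entire next-state distribution \(P^{0}(\cdot \mid s, a)\) at every state-action pair rather than around the low-dimensional weighting parameter.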

@article{liu2025_2505.18044,
  title={Linear Mixture Distributionally Robust Markov Decision Processes},
  author={Zhishuai Liu and Pan Xu},
  journal={arXiv preprint arXiv:2505.18044},
  year={2025}
}