
Generalized Linear Markov Decision Process

Main: 28 pages
4 figures
Bibliography: 1 page
Appendix: 5 pages
Abstract

The linear Markov Decision Process (MDP) framework offers a principled foundation for reinforcement learning (RL) with strong theoretical guarantees and sample efficiency. However, its restrictive assumption that both transition dynamics and reward functions are linear in the same feature space limits its applicability in real-world domains, where rewards often exhibit nonlinear or discrete structure. Motivated by applications such as healthcare and e-commerce, where data is scarce and reward signals can be binary or count-valued, we propose the Generalized Linear MDP (GLMDP) framework, an extension of the linear MDP that models rewards using generalized linear models (GLMs) while maintaining linear transition dynamics. We establish the Bellman completeness of GLMDPs with respect to a new function class that accommodates nonlinear rewards, and we develop two offline RL algorithms: Generalized Pessimistic Value Iteration (GPEVI) and a semi-supervised variant (SS-GPEVI) that uses both labeled and unlabeled trajectories. Our algorithms achieve theoretical guarantees on policy suboptimality and demonstrate improved sample efficiency in settings where reward labels are expensive or limited.
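
To make the modeling relaxation concrete, the following minimal numpy sketch contrasts the linear MDP reward assumption with the GLMDP reward model described above; the feature map phi, the parameter theta, and the logistic link are illustrative placeholders, not the paper's notation or code.

    # A minimal sketch (not the paper's implementation) contrasting the
    # linear-MDP reward assumption with the GLMDP reward model. The feature
    # map phi, parameter theta, and logistic link are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def phi(s, a):
        # Hypothetical state-action feature map shared by rewards and transitions.
        return np.array([s, a, s * a, 1.0]) / np.sqrt(1.0 + s**2 + a**2)

    theta = rng.standard_normal(4)   # reward parameter in the shared feature space

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    x = phi(0.5, 1.0)
    mean_reward_linear = x @ theta        # linear MDP: E[r|s,a] = <phi(s,a), theta>
    mean_reward_glm = sigmoid(x @ theta)  # GLMDP: E[r|s,a] = g(<phi(s,a), theta>)
    observed_r = rng.binomial(1, mean_reward_glm)  # binary reward, e.g. treatment success
    print(mean_reward_linear, mean_reward_glm, observed_r)

With a logistic link the reward is naturally binary and with an exponential link it can be count-valued, while the transition dynamics remain linear in the same features; other link functions follow the same pattern.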

@article{zhang2025_2506.00818,
  title={Generalized Linear Markov Decision Process},
  author={Sinian Zhang and Kaicheng Zhang and Ziping Xu and Tianxi Cai and Doudou Zhou},
  journal={arXiv preprint arXiv:2506.00818},
  year={2025}
}