We study infinite-horizon Discounted Markov Decision Processes (DMDPs) under a generative model. Motivated by the Algorithms with Advice framework (Mitzenmacher and Vassilvitskii, 2022), we propose a novel framework to investigate how a prediction on the transition matrix can enhance sample efficiency in solving DMDPs and improve sample complexity bounds. We focus on DMDPs with $|\mathcal{S}||\mathcal{A}|$ state-action pairs and discount factor $\gamma$. First, we provide an impossibility result: without prior knowledge of the prediction accuracy, no sampling policy can compute an $\epsilon$-optimal policy with a sample complexity bound better than $\widetilde{O}\big(|\mathcal{S}||\mathcal{A}|(1-\gamma)^{-3}\epsilon^{-2}\big)$, which matches the state-of-the-art minimax sample complexity bound attainable with no prediction. As a complement, we propose an algorithm based on minimax optimization techniques that leverages the prediction on the transition matrix. Our algorithm achieves a sample complexity bound that depends on the prediction error and is uniformly better than $\widetilde{O}\big(|\mathcal{S}||\mathcal{A}|(1-\gamma)^{-4}\epsilon^{-2}\big)$, the previous best result derived from convex optimization methods. These theoretical findings are further supported by our numerical experiments.
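To make the setting concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of why a prediction on the transition matrix is useful: planning against a predicted transition tensor `P_pred` yields a value function whose error degrades gracefully with the prediction error. All variable names, the noise model `err`, and the toy MDP sizes are assumptions made for this example.

```python
import numpy as np

def value_iteration(P, r, gamma, iters=500):
    """Plan in a known DMDP: P is (S, A, S) row-stochastic, r is (S, A)."""
    S, A, _ = P.shape
    v = np.zeros(S)
    for _ in range(iters):
        q = r + gamma * (P @ v)      # Bellman backup, shape (S, A)
        v = q.max(axis=1)
    return v, q.argmax(axis=1)

rng = np.random.default_rng(0)
S, A, gamma = 5, 2, 0.9
P_true = rng.dirichlet(np.ones(S), size=(S, A))  # true transitions
r = rng.random((S, A))

# A hypothetical prediction of P_true, corrupted by noise of magnitude `err`;
# the convex combination of row-stochastic tensors stays row-stochastic.
err = 0.05
P_pred = (1 - err) * P_true + err * rng.dirichlet(np.ones(S), size=(S, A))

v_true, _ = value_iteration(P_true, r, gamma)
v_pred, _ = value_iteration(P_pred, r, gamma)
print(np.abs(v_true - v_pred).max())
```

Under the standard simulation-lemma bound, the value error is at most $\gamma\,\|P_{\text{pred}}-P_{\text{true}}\|_1 V_{\max}/(1-\gamma)$, so a more accurate prediction leaves fewer transitions to be learned from samples; the paper's contribution is an algorithm whose sample complexity adapts to this prediction error.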
@article{lyu2025_2502.15345,
  title   = {Efficiently Solving Discounted MDPs with Predictions on Transition Matrices},
  author  = {Lixing Lyu and Jiashuo Jiang and Wang Chi Cheung},
  journal = {arXiv preprint arXiv:2502.15345},
  year    = {2025}
}