Model approximation in MDPs with unbounded per-step cost

13 February 2024
Berk Bozkurt
Aditya Mahajan
A. Nayyar
Yi Ouyang
arXiv:2402.08813
Abstract

We consider the problem of designing a control policy for an infinite-horizon discounted cost Markov decision process $\mathcal{M}$ when we only have access to an approximate model $\hat{\mathcal{M}}$. How well does an optimal policy $\hat{\pi}^{\star}$ of the approximate model perform when used in the original model $\mathcal{M}$? We answer this question by bounding a weighted norm of the difference between the value function of $\hat{\pi}^{\star}$ when used in $\mathcal{M}$ and the optimal value function of $\mathcal{M}$. We then extend our results and obtain potentially tighter upper bounds by considering affine transformations of the per-step cost. We further provide upper bounds that explicitly depend on the weighted distance between the cost functions and the weighted distance between the transition kernels of the original and approximate models. We present examples to illustrate our results.
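To make the abstract's statement concrete, the following is a minimal LaTeX sketch of the generic shape that weighted-norm model-approximation bounds of this kind take. It is illustrative only, not the paper's theorem: the weight function $w$, the mismatch constants $\varepsilon$ and $\delta$, and the growth constant $\alpha$ are assumed notation, not taken from the paper.

% Illustrative sketch (our notation); not the theorem of arXiv:2402.08813.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Let $\|f\|_w = \sup_s |f(s)|/w(s)$ denote the weighted sup-norm, and suppose
the mismatch between the models is measured as
\[
  \varepsilon = \sup_{s,a} \frac{|c(s,a) - \hat{c}(s,a)|}{w(s)},
  \qquad
  \delta = \sup_{s,a} \frac{1}{w(s)}
    \Bigl| \sum_{s'} \bigl( P(s' \mid s,a) - \hat{P}(s' \mid s,a) \bigr)\,
    \hat{V}^{\star}(s') \Bigr|.
\]
If, in addition, $\sum_{s'} P(s' \mid s,a)\, w(s') \le \alpha\, w(s)$ for all
$(s,a)$ with $\gamma\alpha < 1$, then a bound of the following generic form
controls the suboptimality of $\hat{\pi}^{\star}$ in $\mathcal{M}$:
\[
  \bigl\| V^{\hat{\pi}^{\star}}_{\mathcal{M}} - V^{\star}_{\mathcal{M}} \bigr\|_w
  \;\le\; \frac{2\,(\varepsilon + \gamma\delta)}{1 - \gamma\alpha}.
\]
The factor of two comes from splitting the error into the simulation gap of
$\hat{\pi}^{\star}$ between the two models and the gap between the two optimal
value functions, each bounded by the same mismatch terms.
\end{document}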
