Lower Bounds for Policy Iteration on Multi-action MDPs

16 September 2020
Kumar Ashutosh
Sarthak Consul
Bhishma Dedhia
Parthasarathi Khirwadkar
Sahil Shah
Shivaram Kalyanakrishnan
arXiv:2009.07842
Abstract

Policy Iteration (PI) is a classical family of algorithms to compute an optimal policy for any given Markov Decision Problem (MDP). The basic idea in PI is to begin with some initial policy and to repeatedly update the policy to one from an improving set, until an optimal policy is reached. Different variants of PI result from the (switching) rule used for improvement. An important theoretical question is how many iterations a specified PI variant will take to terminate as a function of the number of states $n$ and the number of actions $k$ in the input MDP. While there has been considerable progress towards upper-bounding this number, there are fewer results on lower bounds. In particular, existing lower bounds primarily focus on the special case of $k = 2$ actions. We devise lower bounds for $k \geq 3$. Our main result is that a particular variant of PI can take $\Omega(k^{n/2})$ iterations to terminate. We also generalise existing constructions on $2$-action MDPs to scale lower bounds by a factor of $k$ for some common deterministic variants of PI, and by $\log(k)$ for corresponding randomised variants.
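To make the iterate-until-no-improvement loop described above concrete, here is a minimal sketch of one common PI variant (Howard's PI, which switches every improvable state to a greedy action) on a discounted MDP. The random MDP, the discount factor, and all variable names are illustrative assumptions and are not taken from the paper, which analyses termination bounds for several PI switching rules rather than this particular implementation.

```python
# Minimal policy iteration sketch (Howard's variant) on a hypothetical random MDP.
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """P: (n, k, n) transition probabilities, R: (n, k) expected rewards."""
    n, k, _ = P.shape
    policy = np.zeros(n, dtype=int)            # start from an arbitrary initial policy
    iterations = 0
    while True:
        iterations += 1
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = P[np.arange(n), policy]         # (n, n) transitions under current policy
        R_pi = R[np.arange(n), policy]         # (n,) rewards under current policy
        V = np.linalg.solve(np.eye(n) - gamma * P_pi, R_pi)
        # Policy improvement: Q(s, a) = R(s, a) + gamma * sum_s' P(s, a, s') * V(s').
        Q = R + gamma * (P @ V)                # (n, k) action values
        new_policy = Q.argmax(axis=1)          # switch every state to a greedy action
        if np.array_equal(new_policy, policy):
            return policy, V, iterations       # no improving switch exists: policy is optimal
        policy = new_policy

# Hypothetical MDP with n = 6 states and k = 3 actions.
rng = np.random.default_rng(0)
n, k = 6, 3
P = rng.random((n, k, n))
P /= P.sum(axis=2, keepdims=True)              # normalise each (s, a) row into a distribution
R = rng.random((n, k))
pi_star, V_star, iters = policy_iteration(P, R)
print(f"optimal policy: {pi_star}, found in {iters} iterations")
```

The iteration count returned by this loop is exactly the quantity the paper bounds from below: how many improvement steps a given switching rule can require as a function of $n$ and $k$.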
