Clustering Markov Decision Processes For Continual Transfer

15 November 2013
M. H. Mahmud
Majd Hawasly
Benjamin Rosman
S. Ramamoorthy
Abstract

We present algorithms to effectively represent a set of Markov decision processes (MDPs), whose optimal policies have already been learned, by a smaller source subset for lifelong, policy-reuse-based transfer learning in reinforcement learning. This is necessary when the number of previous tasks is large and the cost of measuring similarity counteracts the benefit of transfer. The source subset forms an `$\epsilon$-net' over the original set of MDPs, in the sense that for each previous MDP $M_p$, there is a source $M^s$ whose optimal policy has $<\epsilon$ regret in $M_p$. Our contributions are as follows. We present EXP-3-Transfer, a principled policy-reuse algorithm that optimally reuses a given source policy set when learning for a new MDP. We present a framework to cluster the previous MDPs to extract a source subset. The framework consists of (i) a distance $d_V$ over MDPs to measure policy-based similarity between MDPs; (ii) a cost function $g(\cdot)$ that uses $d_V$ to measure how good a particular clustering is for generating useful source tasks for EXP-3-Transfer; and (iii) a provably convergent algorithm, MHAV, for finding the optimal clustering. We validate our algorithms through experiments in a surveillance domain.
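
The abstract does not spell out the update rule of EXP-3-Transfer, but since it is a policy-reuse variant of the EXP3 adversarial bandit, a minimal sketch would treat each candidate source policy (plus, typically, one fresh learning policy) as a bandit arm and update importance-weighted estimates of episode return. The function names, the exploration rate, and the reward normalization below are illustrative assumptions, not the paper's exact algorithm.

```python
import math
import random

def exp3_policy_reuse(policies, run_episode, num_episodes, gamma=0.1):
    """EXP3-style policy reuse (an illustrative sketch, not the paper's
    exact EXP-3-Transfer).  `policies` is a list of candidate source
    policies; `run_episode(pi)` executes policy pi for one episode on the
    new MDP and returns a return assumed to be normalized to [0, 1]."""
    k = len(policies)
    weights = [1.0] * k
    for _ in range(num_episodes):
        total = sum(weights)
        # Mix the weight distribution with uniform exploration at rate gamma.
        probs = [(1 - gamma) * w / total + gamma / k for w in weights]
        i = random.choices(range(k), weights=probs)[0]
        reward = run_episode(policies[i])
        # Importance-weighted estimate: only the pulled arm gets credit,
        # scaled by 1/probs[i] so the estimate stays unbiased.
        est = reward / probs[i]
        weights[i] *= math.exp(gamma * est / k)
    return weights  # higher weight = more useful source policy so far
```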

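The paper extracts the source subset by clustering with the distance $d_V$, the cost $g(\cdot)$, and the MHAV algorithm, none of which are specified in the abstract. As a rough stand-in, the greedy cover below illustrates only the guarantee the resulting $\epsilon$-net provides: every previous MDP ends up within $\epsilon$ regret of some chosen source. The pairwise regret matrix is a hypothetical input; MHAV itself works differently.

```python
def greedy_epsilon_net(regret, epsilon):
    """Greedy stand-in for the clustering step (the paper uses MHAV, not
    this).  `regret[s][p]` is the regret of source s's optimal policy when
    run in MDP p, with regret[p][p] == 0 assumed so every MDP can cover
    itself.  Returns source indices covering every MDP within epsilon."""
    n = len(regret)
    uncovered = set(range(n))
    sources = []
    while uncovered:
        # Pick the MDP whose optimal policy covers the most uncovered MDPs.
        best = max(range(n),
                   key=lambda s: sum(regret[s][p] < epsilon for p in uncovered))
        sources.append(best)
        uncovered -= {p for p in uncovered if regret[best][p] < epsilon}
    return sources
```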