Policy Finetuning: Bridging Sample-Efficient Offline and Online Reinforcement Learning

9 June 2021
Tengyang Xie
Nan Jiang
Huan Wang
Caiming Xiong
Yu Bai
Topics: OffRL, OnRL
arXiv:2106.04895
Abstract

Recent theoretical work studies sample-efficient reinforcement learning (RL) extensively in two settings: learning interactively in the environment (online RL), or learning from an offline dataset (offline RL). However, existing algorithms and theories for learning near-optimal policies in these two settings are rather different and disconnected. Towards bridging this gap, this paper initiates the theoretical study of policy finetuning, that is, online RL where the learner has additional access to a "reference policy" $\mu$ close to the optimal policy $\pi_\star$ in a certain sense. We consider the policy finetuning problem in episodic Markov Decision Processes (MDPs) with $S$ states, $A$ actions, and horizon length $H$. We first design a sharp offline reduction algorithm -- which simply executes $\mu$ and runs offline policy optimization on the collected dataset -- that finds an $\varepsilon$ near-optimal policy within $\widetilde{O}(H^3 S C^\star / \varepsilon^2)$ episodes, where $C^\star$ is the single-policy concentrability coefficient between $\mu$ and $\pi_\star$. This offline result is the first that matches the sample complexity lower bound in this setting, and resolves a recent open question in offline RL. We then establish an $\Omega(H^3 S \min\{C^\star, A\} / \varepsilon^2)$ sample complexity lower bound for any policy finetuning algorithm, including those that can adaptively explore the environment. This implies that -- perhaps surprisingly -- the optimal policy finetuning algorithm is either offline reduction or a purely online RL algorithm that does not use $\mu$. Finally, we design a new hybrid offline/online algorithm for policy finetuning that achieves better sample complexity than both vanilla offline reduction and purely online RL algorithms, in a relaxed setting where $\mu$ only satisfies concentrability partially up to a certain time step.
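As a rough illustration of the offline reduction described above (not the paper's exact procedure), the sketch below rolls out the reference policy $\mu$ in a tabular episodic MDP to log a batch of episodes, then runs a generic pessimism-based value-iteration step on the empirical model as a stand-in for the offline policy-optimization subroutine. All names (`collect_episodes`, `pessimistic_value_iteration`, the bonus scale `beta`) are illustrative assumptions, not identifiers from the paper.

```python
# Sketch of the offline reduction: (1) execute the reference policy mu to
# collect episodes, (2) run an offline policy-optimization step on the data.
# A count-based pessimistic (LCB) value iteration stands in for step (2).
import numpy as np

def collect_episodes(P, R, mu, H, n_episodes, rng):
    """Roll out mu and return logged (h, s, a, r, s_next) tuples.

    P: (H, S, A, S) transition probabilities, R: (H, S, A) rewards in [0, 1],
    mu: (H, S, A) action probabilities of the reference policy.
    """
    S, A = P.shape[1], P.shape[2]
    data = []
    for _ in range(n_episodes):
        s = 0  # fixed initial state for simplicity
        for h in range(H):
            a = rng.choice(A, p=mu[h, s])
            s_next = rng.choice(S, p=P[h, s, a])
            data.append((h, s, a, R[h, s, a], s_next))
            s = s_next
    return data

def pessimistic_value_iteration(data, S, A, H, beta=1.0):
    """Offline step: empirical model plus a count-based pessimism penalty."""
    counts = np.zeros((H, S, A))
    r_sum = np.zeros((H, S, A))
    trans = np.zeros((H, S, A, S))
    for h, s, a, r, s_next in data:
        counts[h, s, a] += 1
        r_sum[h, s, a] += r
        trans[h, s, a, s_next] += 1
    n = np.maximum(counts, 1)
    R_hat = r_sum / n
    P_hat = trans / n[..., None]
    V = np.zeros((H + 1, S))
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        # Lower-confidence-bound Q-values: subtract a bonus shrinking with counts.
        Q = R_hat[h] + P_hat[h] @ V[h + 1] - beta / np.sqrt(n[h])
        Q = np.clip(Q, 0.0, H - h)
        pi[h] = Q.argmax(axis=1)
        V[h] = Q.max(axis=1)
    return pi

def offline_reduction(P, R, mu, H, n_episodes, seed=0):
    """Collect data with mu, then plan pessimistically on the logged dataset."""
    rng = np.random.default_rng(seed)
    data = collect_episodes(P, R, mu, H, n_episodes, rng)
    S, A = P.shape[1], P.shape[2]
    return pessimistic_value_iteration(data, S, A, H)
```

Per the abstract, the number of logged episodes would be taken on the order of $\widetilde{O}(H^3 S C^\star / \varepsilon^2)$ to guarantee an $\varepsilon$ near-optimal output policy.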
