Toward the Fundamental Limits of Imitation Learning

13 September 2020
Nived Rajaraman, Lin F. Yang, Jiantao Jiao, K. Ramachandran
arXiv:2009.05990
Abstract

Imitation learning (IL) aims to mimic the behavior of an expert policy in a sequential decision-making problem given only demonstrations. In this paper, we focus on understanding the minimax statistical limits of IL in episodic Markov Decision Processes (MDPs). We first consider the setting where the learner is provided a dataset of $N$ expert trajectories ahead of time, and cannot interact with the MDP. Here, we show that the policy which mimics the expert whenever possible is in expectation $\lesssim \frac{|\mathcal{S}| H^2 \log(N)}{N}$ suboptimal compared to the value of the expert, even when the expert follows an arbitrary stochastic policy. Here $\mathcal{S}$ is the state space, and $H$ is the length of the episode. Furthermore, we establish a suboptimality lower bound of $\gtrsim |\mathcal{S}| H^2 / N$ which applies even if the expert is constrained to be deterministic, or if the learner is allowed to actively query the expert at visited states while interacting with the MDP for $N$ episodes. To our knowledge, this is the first algorithm with suboptimality having no dependence on the number of actions, under no additional assumptions. We then propose a novel algorithm based on minimum-distance functionals in the setting where the transition model is given and the expert is deterministic. The algorithm is suboptimal by $\lesssim \min\{ H \sqrt{|\mathcal{S}| / N},\ |\mathcal{S}| H^{3/2} / N \}$, showing that knowledge of the transition model improves the minimax rate by at least a $\sqrt{H}$ factor.
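The "mimic the expert whenever possible" policy discussed in the abstract is tabular behavior cloning: at every (timestep, state) pair seen in the demonstrations, copy the expert's empirical action; at unseen states, act arbitrarily. The following is a minimal Python sketch of that idea under stated assumptions; the data layout, function name, and the uniform-random fallback at unseen states are illustrative choices, not the paper's exact construction.

```python
import numpy as np
from collections import Counter, defaultdict

def mimic_expert(trajectories, num_actions, rng=None):
    """Sketch of tabular behavior cloning from expert demonstrations.

    trajectories: list of N expert episodes, each a list of
    (state, action) pairs, one per timestep. This layout is an
    illustrative assumption, not the paper's notation.
    """
    rng = rng or np.random.default_rng(0)
    # Count expert actions at each (timestep, state) visited in the data.
    counts = defaultdict(Counter)
    for episode in trajectories:
        for h, (s, a) in enumerate(episode):
            counts[(h, s)][a] += 1

    def policy(h, s):
        seen = counts[(h, s)]
        if seen:
            # At visited states, play the most frequent expert action.
            return seen.most_common(1)[0][0]
        # At states absent from the data, the demonstrations say nothing;
        # this sketch falls back to a uniformly random action.
        return int(rng.integers(num_actions))

    return policy
```

The unseen states are exactly where the learner can err, and the abstract's $\lesssim |\mathcal{S}| H^2 \log(N) / N$ bound says the expected loss they induce shrinks roughly linearly in the number of demonstrations $N$, with no dependence on the number of actions.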
