f-GAIL: Learning f-Divergence for Generative Adversarial Imitation Learning

2 October 2020
Xin Zhang
Yanhua Li
Ziming Zhang
Zhi-Li Zhang
Abstract

Imitation learning (IL) aims to learn a policy from expert demonstrations that minimizes the discrepancy between the learner's and the expert's behaviors. Various imitation learning algorithms have been proposed, each with a different pre-determined divergence to quantify this discrepancy. This naturally raises the question: given a set of expert demonstrations, which divergence recovers the expert policy more accurately and with higher data efficiency? In this work, we propose f-GAIL, a new generative adversarial imitation learning (GAIL) model that automatically learns a discrepancy measure from the f-divergence family together with a policy capable of producing expert-like behaviors. Compared with IL baselines using various predefined divergence measures, f-GAIL learns better policies with higher data efficiency on six physics-based control tasks.
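To make the abstract's central object concrete: an f-divergence is defined as D_f(P ∥ Q) = E_Q[f(p(x)/q(x))] for a convex generator f with f(1) = 0, and different choices of f recover familiar divergences (forward KL, reverse KL, Jensen-Shannon, etc.). The sketch below, which is illustrative only and not the paper's method, evaluates two members of this family on discrete distributions; f-GAIL's contribution is to *learn* the generator f from data rather than fixing one in advance.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_x q(x) * f(p(x) / q(x)) for discrete P, Q.

    `f` must be convex with f(1) = 0; assumes q(x) > 0 wherever p(x) > 0.
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(q * f(p / q)))

# Two well-known generators from the f-divergence family.
kl = lambda t: t * np.log(t)        # forward KL(P || Q)
reverse_kl = lambda t: -np.log(t)   # reverse KL(Q || P)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(f_divergence(p, q, kl))          # forward KL of P from Q
print(f_divergence(p, q, reverse_kl))  # reverse KL
print(f_divergence(q, q, kl))          # zero when P == Q
```

Because the two generators weight the density ratio differently, the two divergences disagree on the same pair of distributions, which is exactly why the choice of f matters for how closely (and how efficiently) an imitation learner matches the expert.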
