  3. 1606.01128
36
7
v1v2 (latest)

Difference of Convex Functions Programming Applied to Control with Expert Data

3 June 2016
Bilal Piot
Matthieu Geist
Olivier Pietquin
    OffRL
Abstract

This paper shows how Difference of Convex functions (DC) programming can improve the performance of some Reinforcement Learning (RL) algorithms that use expert data, as well as Learning from Demonstrations (LfD) algorithms. This is principally because the norm of the Optimal Bellman Residual (OBR), one of the main components of the algorithms considered, is a DC function. The resulting (slight) performance improvement is demonstrated on two algorithms, Reward-regularized Classification for Apprenticeship Learning (RCAL) and Reinforcement Learning with Expert Demonstrations (RLED), through experiments on generic Markov Decision Processes (MDPs) called Garnets.
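To illustrate the DC structure the abstract relies on: a function f = g - h with g and h both convex can be minimized with the classic DCA iteration, which linearizes h at the current point and minimizes the resulting convex surrogate. The sketch below is a hypothetical one-dimensional toy (g(x) = x^4, h(x) = 2x^2), not the paper's OBR-based objective; the functions and names are illustrative assumptions.

```python
import math

def dca_step(x, h_grad, g_argmin):
    """One DCA iteration: linearize h at x, then minimize y -> g(y) - h'(x)*y."""
    return g_argmin(h_grad(x))

# Toy DC objective f(x) = g(x) - h(x) with g(x) = x**4 and h(x) = 2*x**2,
# both convex. f is nonconvex with global minima at x = +/-1, f(+/-1) = -1.
h_grad = lambda x: 4.0 * x  # h'(x) = 4x

# Closed-form surrogate minimizer: argmin_y y**4 - c*y  =>  4*y**3 = c.
g_argmin = lambda c: math.copysign(abs(c / 4.0) ** (1.0 / 3.0), c)

x = 0.5  # starting point
for _ in range(50):
    x = dca_step(x, h_grad, g_argmin)
# The iterates converge to the minimizer x = 1.
```

Each surrogate is convex, so every step is a tractable convex problem; this is the mechanism that lets DC programming handle the nonconvex OBR norm in the paper's setting.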
