Dueling RL: Reinforcement Learning with Trajectory Preferences

8 November 2021 · arXiv:2111.04850
Aldo Pacchiano
Aadirupa Saha
Jonathan Lee
Abstract

We consider the problem of preference-based reinforcement learning (PbRL), where, unlike traditional reinforcement learning, an agent receives feedback only in terms of a 1-bit (0/1) preference over a trajectory pair instead of absolute rewards for them. The success of the traditional RL framework crucially relies on the underlying agent-reward model; however, this depends on how accurately the system designer can express an appropriate reward function, which is often a non-trivial task. The main novelty of our framework is the ability to learn from preference-based trajectory feedback, which eliminates the need to hand-craft numeric reward models. This paper sets up a formal framework for the PbRL problem with non-Markovian rewards, where the trajectory preferences are encoded by a generalized linear model of dimension $d$. Assuming the transition model is known, we then propose an algorithm with an almost optimal regret guarantee of $\tilde{\mathcal{O}}\left( SH d \log (T / \delta) \sqrt{T} \right)$. We further extend the above algorithm to the case of unknown transition dynamics and provide an algorithm with a near-optimal regret guarantee of $\widetilde{\mathcal{O}}\left((\sqrt{d} + H^2 + |\mathcal{S}|)\sqrt{dT} + \sqrt{|\mathcal{S}||\mathcal{A}|TH}\right)$. To the best of our knowledge, our work is one of the first to give tight regret guarantees for preference-based RL problems with trajectory preferences.
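To make the feedback model concrete, below is a minimal Python sketch, not taken from the paper, of the kind of 1-bit trajectory-preference feedback the abstract describes: a comparison between two trajectories whose outcome is drawn from a generalized linear model over trajectory features. The logistic link and all names (preference_feedback, phi, step_features, theta_star) are illustrative assumptions, and the additive per-step feature map is used only for simplicity.

import numpy as np

def preference_feedback(traj_a, traj_b, phi, theta, rng):
    """Return 1 if traj_a is preferred over traj_b, else 0 (a single Bernoulli draw)."""
    score = theta @ (phi(traj_a) - phi(traj_b))      # utility gap in d-dimensional feature space
    p_a = 1.0 / (1.0 + np.exp(-score))               # logistic link, one common generalized linear choice
    return rng.binomial(1, p_a)

# Toy usage with S = 3 states, A = 2 actions, horizon H = 3, feature dimension d = 5.
rng = np.random.default_rng(0)
S, A, d = 3, 2, 5
step_features = rng.normal(size=(S, A, d))           # fixed per-(state, action) features (illustrative)
theta_star = rng.normal(size=d)                      # unknown GLM parameter the learner would estimate

def phi(traj):
    # Trajectory-level feature map; an additive per-step map is used here only for simplicity.
    # The framework in the paper also allows general (non-Markovian) trajectory features.
    return sum(step_features[s, a] for s, a in traj)

traj_a = [(0, 1), (2, 0), (1, 1)]                    # two length-3 trajectories of (state, action) pairs
traj_b = [(0, 0), (1, 0), (2, 1)]
print(preference_feedback(traj_a, traj_b, phi, theta_star, rng))

The learner never sees a numeric reward; it only observes the 0/1 outcome of such comparisons and must estimate the unknown parameter from them.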
