Estimating Q(s,s') with Deep Deterministic Dynamics Gradients

21 February 2020 · arXiv:2002.09505
Ashley D. Edwards, Himanshu Sahni, Rosanne Liu, Jane Hung, Ankit Jain, Rui Wang, Adrien Ecoffet, Thomas Miconi, Charles Isbell, J. Yosinski
    OffRL
Abstract

In this paper, we introduce a novel form of value function, Q(s, s'), that expresses the utility of transitioning from a state s to a neighboring state s' and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. This formulation decouples actions from values while still learning off-policy. We highlight the benefits of this approach in terms of value function transfer, learning within redundant action spaces, and learning off-policy from state observations generated by sub-optimal or completely random policies. Code and videos are available at http://sites.google.com/view/qss-paper.
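To make the formulation concrete (this is a sketch implied by the abstract's definition; the paper's exact operator may differ, e.g. in how reachable states are constrained), Q(s, s') admits a Bellman-style backup in which the usual maximization over actions is replaced by a maximization over reachable successor states:

Q(s, s') = r(s, s') + γ max_{s''} Q(s', s'')

where s'' ranges over states reachable from s'. Under this reading, the learned forward dynamics model plays the role of the policy by proposing the value-maximizing next state; recovering a concrete action that realizes the proposed transition would then be a separate step, for instance via an inverse dynamics model.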
