arXiv:2311.03534

PcLast: Discovering Plannable Continuous Latent States

6 November 2023
Anurag Koul
Shivakanth Sujit
Shaoru Chen
Ben Evans
Lili Wu
Byron Xu
Rajan Chari
Riashat Islam
Raihan Seraj
Yonathan Efroni
Lekan Molu
Miro Dudik
John Langford
Alex Lamb
OffRL, BDL
Abstract

Goal-conditioned planning benefits from learned low-dimensional representations of rich observations. While compact latent representations typically learned from variational autoencoders or inverse dynamics enable goal-conditioned decision making, they ignore state reachability, hampering their performance. In this paper, we learn a representation that associates reachable states together for effective planning and goal-conditioned policy learning. We first learn a latent representation with multi-step inverse dynamics (to remove distracting information), and then transform this representation to associate reachable states together in $\ell_2$ space. Our proposals are rigorously tested in various simulation testbeds. Numerical results in reward-based settings show significant improvements in sampling efficiency. Further, in reward-free settings this approach yields layered state abstractions that enable computationally efficient hierarchical planning for reaching ad hoc goals with zero additional samples.
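The two-stage recipe described in the abstract (a multi-step inverse dynamics objective to learn a compact latent state, followed by a transform that places reachable states close together in $\ell_2$ space) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: the class names, network sizes, and the exact form of the reachability loss are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code). Network sizes, names, and the
# reachability loss form are illustrative assumptions based on the abstract.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a rich observation to a compact latent state phi(o)."""
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))

    def forward(self, obs):
        return self.net(obs)

class MultiStepInverseModel(nn.Module):
    """Predicts the first action a_t from (phi(o_t), phi(o_{t+k})),
    which discards information irrelevant to control (distractors)."""
    def __init__(self, latent_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, action_dim))

    def forward(self, z_t, z_tk):
        return self.net(torch.cat([z_t, z_tk], dim=-1))

def inverse_dynamics_loss(encoder, inv_model, obs_t, obs_tk, act_t):
    """Stage 1: multi-step inverse dynamics objective (continuous actions;
    use cross-entropy instead for discrete action spaces)."""
    z_t, z_tk = encoder(obs_t), encoder(obs_tk)
    return nn.functional.mse_loss(inv_model(z_t, z_tk), act_t)

def reachability_loss(transform, z_t, z_tk, k, scale=1.0):
    """Stage 2 (assumed form): learn a transform of the latent so that the
    l2 distance between states reflects how many steps apart they are.
    `k` is a tensor of step gaps between the paired states."""
    d = torch.norm(transform(z_t) - transform(z_tk), dim=-1)
    return nn.functional.mse_loss(d, scale * k.float())
```

The separation into two stages mirrors the abstract: stage 1 removes distracting information, while the stage-2 transform is what makes simple $\ell_2$ distances meaningful for planning and for building layered state abstractions.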

View on arXiv