APS: Active Pretraining with Successor Features

31 August 2021
Hao Liu, Pieter Abbeel
arXiv:2108.13956

Papers citing "APS: Active Pretraining with Successor Features"

31 / 81 papers shown

Simple Emergent Action Representations from Multi-Task Policy Training
Pu Hua, Yubei Chen, Huazhe Xu
MLAU
18 Oct 2022

Skill-Based Reinforcement Learning with Intrinsic Reward Matching
Ademi Adeniji, Amber Xie, Pieter Abbeel
OffRL
14 Oct 2022

A Mixture of Surprises for Unsupervised Reinforcement Learning
Andrew Zhao, Matthieu Lin, Yangguang Li, Y. Liu, Gao Huang
13 Oct 2022

A Comprehensive Survey of Data Augmentation in Visual Reinforcement Learning
Guozheng Ma, Zhen Wang, Zhecheng Yuan, Xueqian Wang, Bo Yuan, Dacheng Tao
OffRL
10 Oct 2022

EUCLID: Towards Efficient Unsupervised Reinforcement Learning with Multi-choice Dynamics Model
Yifu Yuan, Jianye Hao, Fei Ni, Yao Mu, Yan Zheng, Yujing Hu, Jinyi Liu, Yingfeng Chen, Changjie Fan
02 Oct 2022

Does Zero-Shot Reinforcement Learning Exist?
Ahmed Touati, Jérémy Rapin, Yann Ollivier
OffRL
29 Sep 2022

Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels
Sai Rajeswar, Pietro Mazzaglia, Tim Verbelen, Alexandre Piché, Bart Dhoedt, Aaron C. Courville, Alexandre Lacoste
SSL
24 Sep 2022

Continuous MDP Homomorphisms and Homomorphic Policy Gradient
S. Rezaei-Shoshtari, Rosie Zhao, Prakash Panangaden, D. Meger, Doina Precup
15 Sep 2022

Cell-Free Latent Go-Explore
Quentin Gallouedec, Emmanuel Dellandréa
31 Aug 2022

Self-Supervised Exploration via Temporal Inconsistency in Reinforcement Learning
Zijian Gao, Kele Xu, Yuanzhao Zhai, Dawei Feng, Bo Ding, Xinjun Mao, Huaimin Wang
24 Aug 2022

Dynamic Memory-based Curiosity: A Bootstrap Approach for Exploration
Zijian Gao, Yiying Li, Kele Xu, Yuanzhao Zhai, Dawei Feng, Bo Ding, Xinjun Mao, Huaimin Wang
24 Aug 2022

Optimistic Linear Support and Successor Features as a Basis for Optimal Policy Transfer
L. N. Alegre, A. Bazzan, Bruno C. da Silva
22 Jun 2022

Contrastive Learning as Goal-Conditioned Reinforcement Learning
Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, Sergey Levine
SSL, OffRL
15 Jun 2022

k-Means Maximum Entropy Exploration
Alexander Nedergaard, Matthew Cook
31 May 2022

POLTER: Policy Trajectory Ensemble Regularization for Unsupervised Reinforcement Learning
Frederik Schubert, C. Benjamins, Sebastian Dohler, Bodo Rosenhahn, Marius Lindauer
SSL, OffRL
23 May 2022

Nuclear Norm Maximization Based Curiosity-Driven Learning
Chao Chen, Zijian Gao, Kele Xu, Sen Yang, Yiying Li, Bo Ding, Dawei Feng, Huaimin Wang
21 May 2022

ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters
Xue Bin Peng, Yunrong Guo, L. Halper, Sergey Levine, Sanja Fidler
04 May 2022

The Importance of Non-Markovianity in Maximum State Entropy Exploration
Mirco Mutti, Ric De Santi, Marcello Restelli
07 Feb 2022

Challenging Common Assumptions in Convex Reinforcement Learning
Mirco Mutti, Ric De Santi, Piersilvio De Bartolomeis, Marcello Restelli
OffRL
03 Feb 2022

Lipschitz-constrained Unsupervised Skill Discovery
Seohong Park, Jongwook Choi, Jaekyeom Kim, Honglak Lee, Gunhee Kim
02 Feb 2022

CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery
Michael Laskin, Hao Liu, Xue Bin Peng, Denis Yarats, Aravind Rajeswaran, Pieter Abbeel
SSL
01 Feb 2022

Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning
Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, A. Lazaric, Lerrel Pinto
OffRL, OnRL
31 Jan 2022

Mask-based Latent Reconstruction for Reinforcement Learning
Tao Yu, Zhizheng Zhang, Cuiling Lan, Yan Lu, Zhibo Chen
28 Jan 2022

Unsupervised Reinforcement Learning in Multiple Environments
Mirco Mutti, Mattia Mancassola, Marcello Restelli
OffRL
16 Dec 2021

URLB: Unsupervised Reinforcement Learning Benchmark
Michael Laskin, Denis Yarats, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, Pieter Abbeel
SSL, OffRL
28 Oct 2021

Direct then Diffuse: Incremental Unsupervised Skill Discovery for State Covering and Goal Reaching
Pierre-Alexandre Kamienny, Jean Tarbouriech, Sylvain Lamprier, A. Lazaric, Ludovic Denoyer
SSL
27 Oct 2021

Dynamic Bottleneck for Robust Self-Supervised Exploration
Chenjia Bai, Lingxiao Wang, Lei Han, Animesh Garg, Jianye Hao, Peng Liu, Zhaoran Wang
20 Oct 2021

Temporal Abstraction in Reinforcement Learning with the Successor Representation
Marlos C. Machado, André Barreto, Doina Precup, Michael H. Bowling
12 Oct 2021

A First-Occupancy Representation for Reinforcement Learning
Theodore H. Moskovitz, S. Wilson, M. Sahani
28 Sep 2021

Deep Reinforcement Learning at the Edge of the Statistical Precipice
Rishabh Agarwal, Max Schwarzer, P. S. Castro, Aaron Courville, Marc G. Bellemare
OffRL
30 Aug 2021

Behavior From the Void: Unsupervised Active Pre-Training
Hao Liu, Pieter Abbeel
VLM, SSL
08 Mar 2021