Fast Task Inference with Variational Intrinsic Successor Features
S. Hansen, Will Dabney, André Barreto, T. Wiele, David Warde-Farley, Volodymyr Mnih
arXiv:1906.05030 (12 June 2019) [BDL]
Papers citing "Fast Task Inference with Variational Intrinsic Successor Features" (50 of 111 papers shown):
Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels. Sai Rajeswar, Pietro Mazzaglia, Tim Verbelen, Alexandre Piché, Bart Dhoedt, Aaron C. Courville, Alexandre Lacoste (24 Sep 2022) [SSL]
An information-theoretic perspective on intrinsic motivation in reinforcement learning: a survey. A. Aubret, L. Matignon, S. Hassas (19 Sep 2022)
Versatile Skill Control via Self-supervised Adversarial Imitation of Unlabeled Mixed Motions. Chenhao Li, Sebastian Blaes, Pavel Kolev, Marin Vlastelica, Jonas Frey, Georg Martius (16 Sep 2022) [SSL]
Human-level Atari 200x faster. Steven Kapturowski, Victor Campos, Ray Jiang, Nemanja Rakićević, Hado van Hasselt, Charles Blundell, Adria Puigdomenech Badia (15 Sep 2022) [OffRL]
Basis for Intentions: Efficient Inverse Reinforcement Learning using Past Experience. Marwa Abdulhai, Natasha Jaques, Sergey Levine (09 Aug 2022) [OffRL]
Optimistic Linear Support and Successor Features as a Basis for Optimal Policy Transfer. L. N. Alegre, A. Bazzan, Bruno C. da Silva (22 Jun 2022)
Generalised Policy Improvement with Geometric Policy Composition. S. Thakoor, Mark Rowland, Diana Borsa, Will Dabney, Rémi Munos, André Barreto (17 Jun 2022) [OffRL]
Contrastive Learning as Goal-Conditioned Reinforcement Learning. Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, Sergey Levine (15 Jun 2022) [SSL, OffRL]
Discrete State-Action Abstraction via the Successor Representation. A. Attali, Pedro Cisneros-Velarde, M. Morales, Nancy M. Amato (07 Jun 2022) [OffRL]
First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization. S. Reddy, Sergey Levine, Anca Dragan (24 May 2022) [SSL]
Task Relabelling for Multi-task Transfer using Successor Features. Martin Balla, Diego Perez-Liebana (20 May 2022)
Temporal Abstractions-Augmented Temporally Contrastive Learning: An Alternative to the Laplacian in RL. Akram Erraqabi, Marlos C. Machado, Mingde Zhao, Sainbayar Sukhbaatar, A. Lazaric, Ludovic Denoyer, Yoshua Bengio (21 Mar 2022) [OffRL]
Perceiving the World: Question-guided Reinforcement Learning for Text-based Games. Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, Joey Tianyi Zhou, Chengqi Zhang (20 Mar 2022) [OffRL]
Fast and Data Efficient Reinforcement Learning from Pixels via Non-Parametric Value Approximation. Alex Long, Alan Blair, H. V. Hoof (07 Mar 2022)
Reward-Free Policy Space Compression for Reinforcement Learning. Mirco Mutti, Stefano Del Col, Marcello Restelli (22 Feb 2022)
Soft Actor-Critic with Inhibitory Networks for Faster Retraining. J. Ide, Daria Mićović, Michael J. Guarino, K. Alcedo, D. Rosenbluth, Adrian P. Pope (07 Feb 2022)
Challenging Common Assumptions in Convex Reinforcement Learning. Mirco Mutti, Ric De Santi, Piersilvio De Bartolomeis, Marcello Restelli (03 Feb 2022) [OffRL]
Lipschitz-constrained Unsupervised Skill Discovery. Seohong Park, Jongwook Choi, Jaekyeom Kim, Honglak Lee, Gunhee Kim (02 Feb 2022)
CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery. Michael Laskin, Hao Liu, Xue Bin Peng, Denis Yarats, Aravind Rajeswaran, Pieter Abbeel (01 Feb 2022) [SSL]
Mask-based Latent Reconstruction for Reinforcement Learning. Tao Yu, Zhizheng Zhang, Cuiling Lan, Yan Lu, Zhibo Chen (28 Jan 2022)
The Challenges of Exploration for Offline Reinforcement Learning. Nathan Lambert, Markus Wulfmeier, William F. Whitney, Arunkumar Byravan, Michael Bloesch, Vibhavari Dasagi, Tim Hertweck, Martin Riedmiller (27 Jan 2022) [OffRL]
A Generalized Bootstrap Target for Value-Learning, Efficiently Combining Value and Feature Predictions. Anthony GX-Chen, Veronica Chelu, Blake A. Richards, Joelle Pineau (05 Jan 2022) [TTA]
Constructing a Good Behavior Basis for Transfer using Generalized Policy Updates. Safa Alver, Doina Precup (30 Dec 2021) [OffRL]
Analysis and Prediction of NLP Models Via Task Embeddings. Damien Sileo, Marie-Francine Moens (10 Dec 2021)
Interesting Object, Curious Agent: Learning Task-Agnostic Exploration. Simone Parisi, Victoria Dean, Deepak Pathak, Abhinav Gupta (25 Nov 2021) [LM&Ro]
Successor Feature Neural Episodic Control. David Emukpere, Xavier Alameda-Pineda, Chris Reinke (04 Nov 2021) [BDL]
Successor Feature Representations. Chris Reinke, Xavier Alameda-Pineda (29 Oct 2021)
URLB: Unsupervised Reinforcement Learning Benchmark. Michael Laskin, Denis Yarats, Hao Liu, Kimin Lee, Albert Zhan, Kevin Lu, Catherine Cang, Lerrel Pinto, Pieter Abbeel (28 Oct 2021) [SSL, OffRL]
Direct then Diffuse: Incremental Unsupervised Skill Discovery for State Covering and Goal Reaching. Pierre-Alexandre Kamienny, Jean Tarbouriech, Sylvain Lamprier, A. Lazaric, Ludovic Denoyer (27 Oct 2021) [SSL]
Dynamic Bottleneck for Robust Self-Supervised Exploration. Chenjia Bai, Lingxiao Wang, Lei Han, Animesh Garg, Jianye Hao, Peng Liu, Zhaoran Wang (20 Oct 2021)
Temporal Abstraction in Reinforcement Learning with the Successor Representation. Marlos C. Machado, André Barreto, Doina Precup, Michael H. Bowling (12 Oct 2021)
Braxlines: Fast and Interactive Toolkit for RL-driven Behavior Engineering beyond Reward Maximization. S. Gu, Manfred Diaz, Daniel Freeman, Hiroki Furuta, Seyed Kamyar Seyed Ghasemipour, Anton Raichuk, Byron David, Erik Frey, Erwin Coumans, Olivier Bachem (10 Oct 2021)
The Information Geometry of Unsupervised Reinforcement Learning. Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine (06 Oct 2021) [SSL, OffRL]
A First-Occupancy Representation for Reinforcement Learning. Theodore H. Moskovitz, S. Wilson, M. Sahani (28 Sep 2021)
Dynamics-Aware Quality-Diversity for Efficient Learning of Skill Repertoires. Bryan Lim, Luca Grillotti, Lorenzo Bernasconi, Antoine Cully (16 Sep 2021)
APS: Active Pretraining with Successor Features. Hao Liu, Pieter Abbeel (31 Aug 2021)
Deep Reinforcement Learning at the Edge of the Statistical Precipice. Rishabh Agarwal, Max Schwarzer, P. S. Castro, Aaron Courville, Marc G. Bellemare (30 Aug 2021) [OffRL]
Learning more skills through optimistic exploration. D. Strouse, Kate Baumli, David Warde-Farley, Vlad Mnih, S. Hansen (29 Jul 2021) [SSL]
Explore and Control with Adversarial Surprise. Arnaud Fickinger, Natasha Jaques, Samyak Parajuli, Michael Chang, Nicholas Rhinehart, Glen Berseth, Stuart J. Russell, Sergey Levine (12 Jul 2021)
Pretrained Encoders are All You Need. Mina Khan, P. Srivatsa, Advait Rane, Shriram Chenniappa, Rishabh Anand, Sherjil Ozair, Pattie Maes (09 Jun 2021) [SSL, VLM]
Pretraining Representations for Data-Efficient Reinforcement Learning. Max Schwarzer, Nitarshan Rajkumar, Michael Noukhovitch, Ankesh Anand, Laurent Charlin, Devon Hjelm, Philip Bachman, Aaron Courville (09 Jun 2021) [OffRL]
DisTop: Discovering a Topological representation to learn diverse and rewarding skills. A. Aubret, L. Matignon, S. Hassas (06 Jun 2021)
Variational Empowerment as Representation Learning for Goal-Based Reinforcement Learning. Jongwook Choi, Archit Sharma, Honglak Lee, Sergey Levine, S. Gu (02 Jun 2021) [DRL]
Discovering Diverse Nearly Optimal Policies with Successor Features. Tom Zahavy, Brendan O'Donoghue, André Barreto, Volodymyr Mnih, Sebastian Flennerhag, Satinder Singh (01 Jun 2021)
Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning. Hiroki Furuta, T. Matsushima, Tadashi Kozuno, Y. Matsuo, Sergey Levine, Ofir Nachum, S. Gu (23 Mar 2021) [OffRL]
Learning One Representation to Optimize All Rewards. Ahmed Touati, Yann Ollivier (14 Mar 2021) [OffRL]
Behavior From the Void: Unsupervised Active Pre-Training. Hao Liu, Pieter Abbeel (08 Mar 2021) [VLM, SSL]
Successor Feature Sets: Generalizing Successor Representations Across Policies. Kianté Brantley, Soroush Mehri, Geoffrey J. Gordon (03 Mar 2021) [OffRL]
PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning. Angelos Filos, Clare Lyle, Y. Gal, Sergey Levine, Natasha Jaques, Gregory Farquhar (24 Feb 2021)
Beyond Fine-Tuning: Transferring Behavior in Reinforcement Learning. Victor Campos, Pablo Sprechmann, S. Hansen, André Barreto, Steven Kapturowski, Alex Vitvitskyi, Adria Puigdomenech Badia, Charles Blundell (24 Feb 2021) [OffRL, OnRL]