PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning
arXiv: 2306.10711 · 19 June 2023
Hojoon Lee, Hanseul Cho, Hyunseung Kim, Daehoon Gwak, Joonkee Kim, Jaegul Choo, Se-Young Yun, Chulhee Yun
Tags: OffRL

Papers citing "PLASTIC: Improving Input and Label Plasticity for Sample Efficient Reinforcement Learning" (31 papers shown)
Plasticine: Accelerating Research in Plasticity-Motivated Deep Reinforcement Learning
Mingqi Yuan, Qi Wang, Guozheng Ma, Bo-wen Li, Xin Jin, Yunbo Wang, Xiaokang Yang, Wenjun Zeng, D. Tao
Tags: OffRL, AI4CE · 24 Apr 2025

A Champion-level Vision-based Reinforcement Learning Agent for Competitive Racing in Gran Turismo 7
Hojoon Lee, Takuma Seno, Jun Jet Tai, K. Subramanian, Kenta Kawamoto, Peter Stone, Peter R. Wurman
12 Apr 2025

Understanding Flatness in Generative Models: Its Role and Benefits
Taehwan Lee, Kyeongkook Seo, Jaejun Yoo, Sung Whan Yoon
Tags: DiffM · 14 Mar 2025

Hyperspherical Normalization for Scalable Deep Reinforcement Learning
Hojoon Lee, Youngdo Lee, Takuma Seno, Donghu Kim, Peter Stone, Jaegul Choo
24 Feb 2025

Activation by Interval-wise Dropout: A Simple Way to Prevent Neural Networks from Plasticity Loss
Sangyeon Park, Isaac Han, Seungwon Oh, Kyung-Joong Kim
03 Feb 2025

Reward Fine-Tuning Two-Step Diffusion Models via Learning Differentiable Latent-Space Surrogate Reward
Zhiwei Jia, Yuesong Nan, Huixi Zhao, Gengdai Liu
Tags: EGVM · 22 Nov 2024

DASH: Warm-Starting Neural Network Training in Stationary Settings without Loss of Plasticity
Baekrok Shin, Junsoo Oh, Hanseul Cho, Chulhee Yun
Tags: AI4CE · 30 Oct 2024

SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning
Hojoon Lee, Dongyoon Hwang, Donghu Kim, Hyunseung Kim, Jun Jet Tai, K. Subramanian, Peter R. Wurman, Jaegul Choo, Peter Stone, Takuma Seno
Tags: OffRL · 13 Oct 2024

MAD-TD: Model-Augmented Data stabilizes High Update Ratio RL
C. Voelcker, Marcel Hussing, Eric Eaton, Amir-massoud Farahmand, Igor Gilitschenski
11 Oct 2024

Neuroplastic Expansion in Deep Reinforcement Learning
Jiashun Liu, J. Obando-Ceron, Aaron C. Courville, L. Pan
10 Oct 2024

Can Learned Optimization Make Reinforcement Learning Less Difficult?
Alexander David Goldie, Chris Xiaoxuan Lu, Matthew Jackson, Shimon Whiteson, Jakob N. Foerster
09 Jul 2024

Normalization and effective learning rates in reinforcement learning
Clare Lyle, Zeyu Zheng, Khimya Khetarpal, James Martens, H. V. Hasselt, Razvan Pascanu, Will Dabney
01 Jul 2024

Investigating Pre-Training Objectives for Generalization in Vision-Based Reinforcement Learning
Donghu Kim, Hojoon Lee, Kyungmin Lee, Dongyoon Hwang, Jaegul Choo
Tags: OffRL · 10 Jun 2024

A Study of Plasticity Loss in On-Policy Deep Reinforcement Learning
Arthur Juliani, Jordan T. Ash
Tags: OffRL, OnRL, CLL · 29 May 2024

Diverse Feature Learning by Self-distillation and Reset
Sejik Park
Tags: CLL · 29 Mar 2024

Reset & Distill: A Recipe for Overcoming Negative Transfer in Continual Reinforcement Learning
Hongjoon Ahn, Jinu Hyeon, Youngmin Oh, Bosun Hwang, Taesup Moon
Tags: CLL, OnRL · 08 Mar 2024

A Case for Validation Buffer in Pessimistic Actor-Critic
Michal Nauman, M. Ostaszewski, Marek Cygan
01 Mar 2024

Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning
Michal Nauman, Michal Bortkiewicz, Piotr Miłoś, Tomasz Trzciński, M. Ostaszewski, Marek Cygan
Tags: OffRL · 01 Mar 2024

Disentangling the Causes of Plasticity Loss in Neural Networks
Clare Lyle, Zeyu Zheng, Khimya Khetarpal, H. V. Hasselt, Razvan Pascanu, James Martens, Will Dabney
Tags: AI4CE · 29 Feb 2024

In value-based deep reinforcement learning, a pruned network is a good network
J. Obando-Ceron, Aaron C. Courville, Pablo Samuel Castro
Tags: OffRL · 19 Feb 2024

DistiLLM: Towards Streamlined Distillation for Large Language Models
Jongwoo Ko, Sungnyun Kim, Tianyi Chen, Se-Young Yun
06 Feb 2024

Efficient Sparse-Reward Goal-Conditioned Reinforcement Learning with a High Replay Ratio and Regularization
Takuya Hiraoka
Tags: OffRL · 10 Dec 2023

Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages
Guozheng Ma, Lu Li, Sen Zhang, Zixuan Liu, Zhen Wang, Yixin Chen, Li Shen, Xueqian Wang, Dacheng Tao
Tags: OffRL · 11 Oct 2023

Practical Sharpness-Aware Minimization Cannot Converge All the Way to Optima
Dongkuk Si, Chulhee Yun
16 Jun 2023

Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models
Clara Na, Sanket Vaibhav Mehta, Emma Strubell
25 May 2022

The Primacy Bias in Deep Reinforcement Learning
Evgenii Nikishin, Max Schwarzer, P. D'Oro, Pierre-Luc Bacon, Aaron C. Courville
Tags: OnRL · 16 May 2022

Sharpness-Aware Minimization Improves Language Model Generalization
Dara Bahri, H. Mobahi, Yi Tay
16 Oct 2021

Improving Generalization in Reinforcement Learning with Mixture Regularization
Kaixin Wang, Bingyi Kang, Jie Shao, Jiashi Feng
21 Oct 2020

Decoupling Representation Learning from Reinforcement Learning
Adam Stooke, Kimin Lee, Pieter Abbeel, Michael Laskin
Tags: SSL, DRL · 14 Sep 2020

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
Tags: OffRL, GP · 04 May 2020

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
Tags: ODL · 15 Sep 2016