Plasticine: Accelerating Research in Plasticity-Motivated Deep Reinforcement Learning

Abstract

Developing lifelong learning agents is crucial for artificial general intelligence. However, deep reinforcement learning (RL) systems often suffer from plasticity loss, where neural networks gradually lose their ability to adapt during training. Despite its significance, this field lacks unified benchmarks and evaluation protocols. We introduce Plasticine, the first open-source framework for benchmarking plasticity optimization in deep RL. Plasticine provides single-file implementations of over 13 mitigation methods, 10 evaluation metrics, and learning scenarios with increasing levels of non-stationarity, from standard to open-ended environments. This framework enables researchers to systematically quantify plasticity loss, evaluate mitigation strategies, and analyze plasticity dynamics across different contexts. Our documentation, examples, and source code are available at this https URL.
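To make the notion of "quantifying plasticity loss" concrete, below is a minimal Python sketch of one metric commonly used in this literature, the dormant-neuron ratio (Sokar et al., 2023): the fraction of hidden units whose normalized activation falls below a threshold. The function name, the threshold default, and the use of forward hooks are illustrative assumptions, not Plasticine's actual API.

    import torch
    import torch.nn as nn

    @torch.no_grad()
    def dormant_neuron_ratio(model: nn.Module, batch: torch.Tensor,
                             tau: float = 0.025) -> float:
        """Fraction of ReLU units whose layer-normalized mean activation <= tau."""
        activations = []

        def hook(_module, _inp, out):
            activations.append(out.detach())

        # Record post-activation outputs of every ReLU in the network.
        handles = [m.register_forward_hook(hook)
                   for m in model.modules() if isinstance(m, nn.ReLU)]
        model(batch)
        for h in handles:
            h.remove()

        dormant, total = 0, 0
        for act in activations:
            # Mean absolute activation per unit, averaged over the batch,
            # then normalized by the layer's mean so the score is scale-free.
            score = act.abs().mean(dim=0).flatten()
            score = score / (score.mean() + 1e-8)
            dormant += int((score <= tau).sum())
            total += score.numel()
        return dormant / max(total, 1)

A rising dormant-neuron ratio over the course of training is one signal of plasticity loss; frameworks in this space typically track several such metrics alongside agent returns.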

@article{yuan2025_2504.17490,
  title={Plasticine: Accelerating Research in Plasticity-Motivated Deep Reinforcement Learning},
  author={Mingqi Yuan and Qi Wang and Guozheng Ma and Bo Li and Xin Jin and Yunbo Wang and Xiaokang Yang and Wenjun Zeng and Dacheng Tao},
  journal={arXiv preprint arXiv:2504.17490},
  year={2025}
}