Neuroplastic Expansion in Deep Reinforcement Learning

10 October 2024
Jiashun Liu, Johan Obando-Ceron, Aaron C. Courville, Ling Pan
Abstract

The loss of plasticity in learning agents, analogous to the solidification of neural pathways in biological brains, significantly impedes learning and adaptation in reinforcement learning due to its non-stationary nature. To address this fundamental challenge, we propose a novel approach, Neuroplastic Expansion (NE), inspired by cortical expansion in cognitive science. NE maintains learnability and adaptability throughout the entire training process by dynamically growing the network from a smaller initial size to its full dimension. Our method is designed with three key components: (1) elastic topology generation based on potential gradients, (2) dormant neuron pruning to optimize network expressivity, and (3) neuron consolidation via experience review to strike a balance in the plasticity-stability dilemma. Extensive experiments demonstrate that NE effectively mitigates plasticity loss and outperforms state-of-the-art methods across various tasks in MuJoCo and DeepMind Control Suite environments. NE enables more adaptive learning in complex, dynamic environments, which represents a crucial step towards transitioning deep reinforcement learning from static, one-time training paradigms to more flexible, continually adapting models.
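To make the second component more concrete, below is a minimal, illustrative PyTorch sketch of detecting "dormant" hidden units by their normalized per-unit activation on a batch, the kind of signal such pruning could act on. The score definition, the threshold value, and the network shape are assumptions chosen for demonstration; they are not taken from the paper's implementation.

```python
# Illustrative sketch only: flag "dormant" hidden units in a small MLP by
# measuring their mean absolute activation on a batch, normalized per layer.
# The score, the 0.1 threshold, and the architecture are assumptions made
# for this example, not the method described in the paper.
import torch
import torch.nn as nn


class MLP(nn.Module):
    def __init__(self, in_dim=8, hidden=64, out_dim=4):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x):
        h = torch.relu(self.fc1(x))  # hidden activations we inspect
        return self.fc2(h), h


@torch.no_grad()
def dormant_mask(net, batch, tau=0.1):
    """Return a boolean mask over hidden units whose normalized activation
    score falls below tau (candidates for pruning or reinitialization)."""
    _, h = net(batch)
    score = h.abs().mean(dim=0)            # per-unit mean |activation|
    score = score / (score.mean() + 1e-8)  # normalize by the layer average
    return score < tau


net = MLP()
batch = torch.randn(256, 8)
mask = dormant_mask(net, batch)
print(f"{int(mask.sum())} of {mask.numel()} hidden units look dormant")
```

In a growing-network setting, units flagged this way could be pruned or have their capacity reallocated to newly grown connections; the exact schedule and criteria used by NE are detailed in the full paper.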

@article{liu2025_2410.07994,
  title={Neuroplastic Expansion in Deep Reinforcement Learning},
  author={Jiashun Liu and Johan Obando-Ceron and Aaron Courville and Ling Pan},
  journal={arXiv preprint arXiv:2410.07994},
  year={2025}
}