ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

A study on the plasticity of neural networks

31 May 2021
Tudor Berariu, Wojciech M. Czarnecki, Soham De, J. Bornschein, Samuel L. Smith, Razvan Pascanu, Claudia Clopath

Papers citing "A study on the plasticity of neural networks"

7 / 7 papers shown
Breaking the Reclustering Barrier in Centroid-based Deep Clustering
Lukas Miklautz, Timo Klein, Kevin Sidak, Collin Leiber, Thomas Lang, Andrii Shkabrii, Sebastian Tschiatschek, Claudia Plant
04 Nov 2024
Neuroplastic Expansion in Deep Reinforcement Learning
Jiashun Liu, J. Obando-Ceron, Aaron C. Courville, L. Pan
10 Oct 2024
Normalization and effective learning rates in reinforcement learning
Clare Lyle, Zeyu Zheng, Khimya Khetarpal, James Martens, H. V. Hasselt, Razvan Pascanu, Will Dabney
01 Jul 2024
Directions of Curvature as an Explanation for Loss of Plasticity
Alex Lewandowski, Haruto Tanaka, Dale Schuurmans, Marlos C. Machado
30 Nov 2023
The Dormant Neuron Phenomenon in Deep Reinforcement Learning
Ghada Sokar, Rishabh Agarwal, P. S. Castro, Utku Evci
24 Feb 2023
Does Optimal Source Task Performance Imply Optimal Pre-training for a Target Task?
Steven Gutstein, Brent Lance, Sanjay Shakkottai
21 Jun 2021
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016