Learning Continually by Spectral Regularization

10 June 2024
Alex Lewandowski, Saurabh Kumar, Dale Schuurmans, András György, Marlos C. Machado
CLL
ArXiv · PDF · HTML

Papers citing "Learning Continually by Spectral Regularization"

8 / 8 papers shown
Computational Analysis of Yaredawi YeZema Silt in Ethiopian Orthodox Tewahedo Church Chants
Mequanent Argaw Muluneh, Yan-Tsung Peng, Li Su · 25 Dec 2024

Plastic Learning with Deep Fourier Features
Alex Lewandowski, Dale Schuurmans, Marlos C. Machado · CLL · 27 Oct 2024

Streaming Deep Reinforcement Learning Finally Works
Mohamed Elsayed, G. Vasan, A. R. Mahmood · OffRL · 18 Oct 2024

Disentangling the Causes of Plasticity Loss in Neural Networks
Clare Lyle, Zeyu Zheng, Khimya Khetarpal, H. V. Hasselt, Razvan Pascanu, James Martens, Will Dabney · AI4CE · 29 Feb 2024

Directions of Curvature as an Explanation for Loss of Plasticity
Alex Lewandowski, Haruto Tanaka, Dale Schuurmans, Marlos C. Machado · 30 Nov 2023

Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions
Arthur Jacot · 29 Sep 2022

The Primacy Bias in Deep Reinforcement Learning
Evgenii Nikishin, Max Schwarzer, P. D'Oro, Pierre-Luc Bacon, Aaron C. Courville · OnRL · 16 May 2022

Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Narain Sohl-Dickstein, S. Schoenholz, Jeffrey Pennington · 14 Jun 2018