ResearchTrend.AI

Continual Backprop: Stochastic Gradient Descent with Persistent Randomness

Shibhansh Dohare, R. Sutton, A. R. Mahmood
arXiv:2108.06325 | 13 August 2021
Topics: CLL
Papers citing "Continual Backprop: Stochastic Gradient Descent with Persistent Randomness"

8 / 58 papers shown
Continual task learning in natural and artificial agents
Timo Flesch, Andrew M. Saxe, Christopher Summerfield
CLL | 35 · 24 · 0 | 10 Oct 2022

The Primacy Bias in Deep Reinforcement Learning
Evgenii Nikishin, Max Schwarzer, P. D'Oro, Pierre-Luc Bacon, Aaron C. Courville
OnRL | 90 · 178 · 0 | 16 May 2022

Learning Agent State Online with Recurrent Generate-and-Test
Alireza Samani, R. Sutton
CLL, OffRL | 6 · 2 · 0 | 30 Dec 2021

Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time
Anshul Nasery, Soumyadeep Thakur, Vihari Piratla, A. De, Sunita Sarawagi
AI4TS | 19 · 25 · 0 | 15 Aug 2021

A study on the plasticity of neural networks
Tudor Berariu, Wojciech M. Czarnecki, Soham De, J. Bornschein, Samuel L. Smith, Razvan Pascanu, Claudia Clopath
CLL, AI4CE | 12 · 30 · 0 | 31 May 2021

Adaptive Rational Activations to Boost Deep Reinforcement Learning
Quentin Delfosse, P. Schramowski, Martin Mundt, Alejandro Molina, Kristian Kersting
37 · 14 · 0 | 18 Feb 2021

Towards Continual Reinforcement Learning: A Review and Perspectives
Khimya Khetarpal, Matthew D. Riemer, Irina Rish, Doina Precup
CLL, OffRL | 36 · 307 · 0 | 25 Dec 2020

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD | 317 · 11,681 · 0 | 09 Mar 2017