Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning

12 March 2020
Massimo Caccia, Pau Rodríguez López, O. Ostapenko, Fabrice Normandin, Min-Bin Lin, Lucas Page-Caccia, I. Laradji, Irina Rish, Alexandre Lacoste, David Vazquez, Laurent Charlin
CLL · KELM

Papers citing "Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning"

10 of 10 citing papers shown

GitChameleon: Unmasking the Version-Switching Capabilities of Code Generation Models
Nizar Islah, Justine Gehring, Diganta Misra, Eilif B. Muller, Irina Rish, Terry Yue Zhuo, Massimo Caccia
SyDa · 05 Nov 2024

Sequential Learning in the Dense Associative Memory
Hayden McAlister, Anthony Robins, Lech Szymanski
CLL · 24 Sep 2024

Building a Subspace of Policies for Scalable Continual Learning
Jean-Baptiste Gaya, T. Doan, Lucas Page-Caccia, Laure Soulier, Ludovic Denoyer, Roberta Raileanu
CLL · 18 Nov 2022

DualNet: Continual Learning, Fast and Slow
Quang-Cuong Pham, Chenghao Liu, S. Hoi
CLL · 01 Oct 2021

Graceful Degradation and Related Fields
J. Dymond
21 Jun 2021

Towards mental time travel: a hierarchical memory for reinforcement learning agents
Andrew Kyle Lampinen, Stephanie C. Y. Chan, Andrea Banino, Felix Hill
28 May 2021

New Insights on Reducing Abrupt Representation Change in Online Continual Learning
Lucas Page-Caccia, Rahaf Aljundi, Nader Asadi, Tinne Tuytelaars, Joelle Pineau, Eugene Belilovsky
CLL · 11 Apr 2021

Meta Continual Learning via Dynamic Programming
R. Krishnan, Prasanna Balaprakash
CLL · 05 Aug 2020

Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML
Aniruddh Raghu, M. Raghu, Samy Bengio, Oriol Vinyals
19 Sep 2019

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD · 09 Mar 2017