arXiv: 2203.13167
Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization
24 March 2022
Francesco Pelosin, Saurav Jha, A. Torsello, Bogdan Raducanu, Joost van de Weijer
CLL
Papers citing "Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization" (5 of 5 papers shown)
Continual Named Entity Recognition without Catastrophic Forgetting
Duzhen Zhang, Wei Cong, Jiahua Dong, Yahan Yu, Xiuyi Chen, Yonggang Zhang, Zhen Fang
23 Oct 2023
Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation
Yuliang Cai, Jesse Thomason, Mohammad Rostami
VLM, CLL
25 Mar 2023
The Neural Process Family: Survey, Applications and Perspectives
Saurav Jha, Dong Gong, Xuesong Wang, Richard E. Turner, L. Yao
BDL
01 Sep 2022
Architecture Matters in Continual Learning
Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Timothy Nguyen, Razvan Pascanu, Dilan Görür, Mehrdad Farajtabar
OOD, KELM
01 Feb 2022
Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
MoE
12 Mar 2020