Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks
Ghada Sokar, D. Mocanu, Mykola Pechenizkiy
arXiv:2110.05329 · 11 October 2021 · Tags: CLL
Papers citing "Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks" (7 of 7 shown):
Dynamic Sparse Training versus Dense Training: The Unexpected Winner in Image Corruption Robustness
Boqian Wu, Q. Xiao, Shunxin Wang, N. Strisciuglio, Mykola Pechenizkiy, M. V. Keulen, D. Mocanu, Elena Mocanu
Tags: OOD, 3DH · 03 Oct 2024
FOCIL: Finetune-and-Freeze for Online Class Incremental Learning by Training Randomly Pruned Sparse Experts
Murat Onur Yildirim, Elif Ceren Gok Yildirim, D. Mocanu, Joaquin Vanschoren
Tags: CLL · 13 Mar 2024
Continual learning for surface defect segmentation by subnetwork creation and selection
Aleksandr Dekhovich, Miguel A. Bessa
Tags: CLL · 08 Dec 2023
Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates
Murat Onur Yildirim, Elif Ceren Gok Yildirim, Ghada Sokar, D. Mocanu, Joaquin Vanschoren
Tags: CLL · 28 Aug 2023
iPINNs: Incremental learning for Physics-informed neural networks
Aleksandr Dekhovich, M. Sluiter, David Tax, Miguel A. Bessa
Tags: AI4CE, DiffM · 10 Apr 2023
Architecture Matters in Continual Learning
Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Timothy Nguyen, Razvan Pascanu, Dilan Görür, Mehrdad Farajtabar
Tags: OOD, KELM · 01 Feb 2022
Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
Tags: MQ · 31 Jan 2021