On Anytime Learning at Macroscale

arXiv:2106.09563
17 June 2021

Lucas Caccia, Jing Xu, Myle Ott, Marc'Aurelio Ranzato, Ludovic Denoyer

Papers citing "On Anytime Learning at Macroscale" (14 of 14 shown)

1. Sequence Transferability and Task Order Selection in Continual Learning (10 Feb 2025)
   T. Nguyen, Cuong N. Nguyen, Quang Pham, B. T. Nguyen, Savitha Ramasamy, Xiaoli Li, Cuong V Nguyen

2. Budgeted Online Continual Learning by Adaptive Layer Freezing and Frequency-based Sampling (19 Oct 2024)
   Minhyuk Seo, Hyunseo Koh, Jonghyun Choi

3. CRAFT: Contextual Re-Activation of Filters for face recognition Training (29 Nov 2023) [CVBM]
   Aman Bhatta, Domingo Mery, Haiyu Wu, Kevin W. Bowyer

4. Chunking: Continual Learning is not just about Distribution Shift (03 Oct 2023)
   Thomas L. Lee, Amos Storkey

5. Continual Pre-Training of Large Language Models: How to (re)warm your model? (08 Aug 2023) [KELM]
   Kshitij Gupta, Benjamin Thérien, Adam Ibrahim, Mats L. Richter, Quentin G. Anthony, Eugene Belilovsky, Irina Rish, Timothée Lesort

6. Learn, Unlearn and Relearn: An Online Learning Paradigm for Deep Neural Networks (18 Mar 2023) [MU, OnRL, CLL]
   V. Ramkumar, Elahe Arani, Bahram Zonooz

7. NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research (15 Nov 2022) [OOD, VLM, AI4TS]
   J. Bornschein, Alexandre Galashov, Ross Hemsley, Amal Rannen-Triki, Yutian Chen, ..., Angeliki Lazaridou, Yee Whye Teh, Andrei A. Rusu, Razvan Pascanu, Marc'Aurelio Ranzato

8. Sequential Learning Of Neural Networks for Prequential MDL (14 Oct 2022) [AI4TS]
   J. Bornschein, Yazhe Li, Marcus Hutter

9. When Does Re-initialization Work? (20 Jun 2022)
   Sheheryar Zaidi, Tudor Berariu, Hyunjik Kim, J. Bornschein, Claudia Clopath, Yee Whye Teh, Razvan Pascanu

10. Continual Learning Beyond a Single Model (20 Feb 2022) [CLL]
    T. Doan, Seyed Iman Mirzadeh, Mehrdad Farajtabar

11. Unified Scaling Laws for Routed Language Models (02 Feb 2022) [MoE]
    Aidan Clark, Diego de Las Casas, Aurelia Guy, A. Mensch, Michela Paganini, ..., Oriol Vinyals, Jack W. Rae, Erich Elsen, Koray Kavukcuoglu, Karen Simonyan

12. ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training (11 Oct 2021) [FedML, AI4CE]
    Hui-Po Wang, Sebastian U. Stich, Yang He, Mario Fritz

13. Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks (17 Feb 2021)
    Lemeng Wu, Bo Liu, Peter Stone, Qiang Liu

14. Scaling Laws for Neural Language Models (23 Jan 2020)
    Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei