Continual Learning with Foundation Models: An Empirical Study of Latent Replay

30 April 2022
O. Ostapenko, Timothée Lesort, P. Rodríguez, Md Rifat Arefin, Arthur Douillard, Irina Rish, Laurent Charlin

Papers citing "Continual Learning with Foundation Models: An Empirical Study of Latent Replay"

15 papers

Low-Complexity Inference in Continual Learning via Compressed Knowledge Transfer (13 May 2025)
Zhenrong Liu, J. Huttunen, M. Honkala
Tags: CLL

Bielik 11B v2 Technical Report (05 May 2025)
Krzysztof Ociepa, Łukasz Flis, Krzysztof Wróbel, Adrian Gwoździej, Remigiusz Kinas

Bielik v3 Small: Technical Report (05 May 2025)
Krzysztof Ociepa, Łukasz Flis, Remigiusz Kinas, Krzysztof Wróbel, Adrian Gwoździej

Lightweight Online Adaption for Time Series Foundation Model Forecasts (18 Feb 2025)
Thomas L. Lee, William Toner, Rajkarn Singh, Artjom Joosem, Martin Asenov
Tags: AI4TS

Self-Data Distillation for Recovering Quality in Pruned Large Language Models (13 Oct 2024)
Vithursan Thangarasa, Ganesh Venkatesh, Mike Lasby, Nish Sinnadurai, Sean Lie
Tags: SyDa

Future-Proofing Class-Incremental Learning (04 Apr 2024)
Quentin Jodelet, Xin Liu, Yin Jun Phua, Tsuyoshi Murata
Tags: VLM

Read Between the Layers: Leveraging Multi-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models (13 Dec 2023)
Kyra Ahrens, Hans Hergen Lehmann, Jae Hee Lee, Stefan Wermter
Tags: CLL

Continual Pre-Training of Large Language Models: How to (re)warm your model? (08 Aug 2023)
Kshitij Gupta, Benjamin Thérien, Adam Ibrahim, Mats L. Richter, Quentin G. Anthony, Eugene Belilovsky, Irina Rish, Timothée Lesort
Tags: KELM

A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT (07 Mar 2023)
Yihan Cao, Siyu Li, Yixin Liu, Zhiling Yan, Yutong Dai, Philip S. Yu, Lichao Sun

Multitask Prompted Training Enables Zero-Shot Task Generalization (15 Oct 2021)
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
Tags: LRM

Towards Continual Knowledge Learning of Language Models (07 Oct 2021)
Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo
Tags: CLL, KELM

Emerging Properties in Self-Supervised Vision Transformers (29 Apr 2021)
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin

How Well Does Self-Supervised Pre-Training Perform with Streaming Data? (25 Apr 2021)
Dapeng Hu, Shipeng Yan, Qizhengqiu Lu, Lanqing Hong, Hailin Hu, Yifan Zhang, Zhenguo Li, Xinchao Wang, Jiashi Feng

ImageNet-21K Pretraining for the Masses (22 Apr 2021)
T. Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor
Tags: SSeg, VLM, CLIP

Scaling Laws for Neural Language Models (23 Jan 2020)
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei