Learn or Recall? Revisiting Incremental Learning with Pre-trained Language Models
Junhao Zheng, Shengjie Qiu, Qianli Ma
arXiv:2312.07887, 13 December 2023

Papers citing "Learn or Recall? Revisiting Incremental Learning with Pre-trained Language Models"

14 / 14 papers shown
  1. Gating is Weighting: Understanding Gated Linear Attention through In-context Learning
     Yingcong Li, Davoud Ataee Tarzanagh, A. S. Rawat, Maryam Fazel, Samet Oymak
     06 Apr 2025
  2. Unlocking Continual Learning Abilities in Language Models
     Wenyu Du, Shuang Cheng, Tongxu Luo, Zihan Qiu, Zeyu Huang, Ka Chun Cheung, Reynold Cheng, Jie Fu
     Tags: KELM, CLL. 25 Jun 2024
  3. Towards Incremental Learning in Large Language Models: A Critical Review
     M. Jovanovic, Peter Voss
     Tags: ELM, CLL, KELM. 28 Apr 2024
  4. Incremental Sequence Labeling: A Tale of Two Shifts
     Shengjie Qiu, Junhao Zheng, Zhen Liu, Yicheng Luo, Qianli Ma
     Tags: CLL. 16 Feb 2024
  5. Can LLMs Learn New Concepts Incrementally without Forgetting?
     Junhao Zheng, Shengjie Qiu, Qianli Ma
     Tags: CLL. 13 Feb 2024
  6. Continual Learning with Pre-Trained Models: A Survey
     Da-Wei Zhou, Hai-Long Sun, Jingyi Ning, Han-Jia Ye, De-Chuan Zhan
     Tags: CLL, KELM. 29 Jan 2024
  7. Preserving Commonsense Knowledge from Pre-trained Language Models via Causal Inference
     Junhao Zheng, Qianli Ma, Shengjie Qiu, Yue Wu, Peitian Ma, Junlong Liu, Hu Feng, Xichen Shang, Haibin Chen
     Tags: AAML, KELM, CML, CLL. 19 Jun 2023
  8. Is forgetting less a good inductive bias for forward transfer?
     Jiefeng Chen, Timothy Nguyen, Dilan Görür, Arslan Chaudhry
     Tags: CLL. 14 Mar 2023
  9. Semiparametric Language Models Are Scalable Continual Learners
     Guangyue Peng, Tao Ge, Si-Qing Chen, Furu Wei, Houfeng Wang
     Tags: KELM. 02 Mar 2023
  10. Can BERT Refrain from Forgetting on Sequential Tasks? A Probing Study
      Mingxu Tao, Yansong Feng, Dongyan Zhao
      Tags: CLL, KELM. 02 Mar 2023
  11. Fine-tuned Language Models are Continual Learners
      Thomas Scialom, Tuhin Chakrabarty, Smaranda Muresan
      Tags: CLL, LRM. 24 May 2022
  12. Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System
      Elahe Arani, F. Sarfraz, Bahram Zonooz
      Tags: CLL. 29 Jan 2022
  13. LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5
      Chengwei Qin, Shafiq R. Joty
      Tags: CLL. 14 Oct 2021
  14. Efficient Intent Detection with Dual Sentence Encoders
      I. Casanueva, Tadas Temčinas, D. Gerz, Matthew Henderson, Ivan Vulić
      Tags: VLM. 10 Mar 2020