ResearchTrend.AI

arXiv:2402.08526
Can LLMs Learn New Concepts Incrementally without Forgetting?

13 February 2024
Junhao Zheng, Shengjie Qiu, Qianli Ma
CLL

Papers citing "Can LLMs Learn New Concepts Incrementally without Forgetting?"

12 papers
Q-Tuning: Queue-based Prompt Tuning for Lifelong Few-shot Language Learning
Yanhui Guo, Shaoyuan Xu, Jinmiao Fu, Jia-Wei Liu, Chaosheng Dong, Bryan Wang
VLM, CLL · 22 Apr 2024
Incremental Sequence Labeling: A Tale of Two Shifts
Shengjie Qiu, Junhao Zheng, Zhen Liu, Yicheng Luo, Qianli Ma
CLL · 16 Feb 2024
CITB: A Benchmark for Continual Instruction Tuning
Zihan Zhang, Meng Fang, Ling-Hao Chen, Mohammad-Reza Namazi-Rad
ALM, CLL · 23 Oct 2023
Is forgetting less a good inductive bias for forward transfer?
Jiefeng Chen, Timothy Nguyen, Dilan Görür, Arslan Chaudhry
CLL · 14 Mar 2023
Can BERT Refrain from Forgetting on Sequential Tasks? A Probing Study
Mingxu Tao, Yansong Feng, Dongyan Zhao
CLL, KELM · 02 Mar 2023
ReAct: Synergizing Reasoning and Acting in Language Models
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, Yuan Cao
LLMAG, ReLM, LRM · 06 Oct 2022
Fine-tuned Language Models are Continual Learners
Thomas Scialom, Tuhin Chakrabarty, Smaranda Muresan
CLL, LRM · 24 May 2022
Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
LRM · 15 Oct 2021
LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5
Chengwei Qin, Shafiq R. Joty
CLL · 14 Oct 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 18 Apr 2021
Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM · 14 Dec 2020
Efficient Intent Detection with Dual Sentence Encoders
I. Casanueva, Tadas Temčinas, D. Gerz, Matthew Henderson, Ivan Vulić
VLM · 10 Mar 2020