Towards Robust and Efficient Continual Language Learning

11 July 2023
Adam Fisch, Amal Rannen-Triki, Razvan Pascanu, J. Bornschein, Angeliki Lazaridou, E. Gribovskaya, Marc'Aurelio Ranzato
CLL

Papers citing "Towards Robust and Efficient Continual Language Learning"

9 / 9 papers shown
Investigating Continual Pretraining in Large Language Models: Insights and Implications
Çağatay Yıldız, Nishaanth Kanna Ravichandran, Prishruit Punia, Matthias Bethge, B. Ermiş
CLL, KELM, LRM
27 Feb 2024

Fine-tuned Language Models are Continual Learners
Thomas Scialom, Tuhin Chakrabarty, Smaranda Muresan
CLL, LRM
24 May 2022

StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models
Adam Liška, Tomáš Kočiský, E. Gribovskaya, Tayfun Terzi, Eren Sezener, …, Susannah Young, Ellen Gilsenan-McMahon, Sophia Austin, Phil Blunsom, Angeliki Lazaridou
KELM
23 May 2022

Ranking and Tuning Pre-trained Models: A New Paradigm for Exploiting Model Hubs
Kaichao You, Yong Liu, Ziyang Zhang, Jianmin Wang, Michael I. Jordan, Mingsheng Long
20 Oct 2021

LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5
Chengwei Qin, Shafiq R. Joty
CLL
14 Oct 2021

Towards Continual Knowledge Learning of Language Models
Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo
CLL, KELM
07 Oct 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
18 Apr 2021

A linearized framework and a new benchmark for model selection for fine-tuning
Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, L. Zancato, Charless C. Fowlkes, Rahul Bhotika, Stefano Soatto, Pietro Perona
ALM
29 Jan 2021

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020