Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompt

arXiv:2310.13024 · 19 October 2023
Gangwei Jiang, Caigao Jiang, Siqiao Xue, James Y. Zhang, Junqing Zhou, Defu Lian, Ying Wei
Topics: VLM

Papers citing "Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompt" (7 of 7 papers shown)

1. Prompt-Consistency Image Generation (PCIG): A Unified Framework Integrating LLMs, Knowledge Graphs, and Controllable Diffusion Models
   Yichen Sun, Zhixuan Chu, Zhan Qin, Kui Ren
   DiffM · 30 · 0 · 0 · 24 Jun 2024

2. Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks
   Zixuan Ke, Hu Xu, Bing-Quan Liu
   CLL · 222 · 83 · 0 · 06 Dec 2021

3. SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
   Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Matthew Cer
   VLM, LRM · 137 · 277 · 0 · 15 Oct 2021

4. LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5
   Chengwei Qin, Shafiq R. Joty
   CLL · 170 · 97 · 0 · 14 Oct 2021

5. Towards Continual Knowledge Learning of Language Models
   Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo
   CLL, KELM · 222 · 150 · 0 · 07 Oct 2021

6. The Power of Scale for Parameter-Efficient Prompt Tuning
   Brian Lester, Rami Al-Rfou, Noah Constant
   VPVLM · 280 · 3,835 · 0 · 18 Apr 2021

7. Pre-trained Models for Natural Language Processing: A Survey
   Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
   LM&MA, VLM · 241 · 1,444 · 0 · 18 Mar 2020