When Prompt-based Incremental Learning Does Not Meet Strong Pretraining

arXiv:2308.10445
21 August 2023
Yuyao Tang, Yifan Peng, Weiwen Zheng
Topics: CLL, VLM
Papers citing "When Prompt-based Incremental Learning Does Not Meet Strong Pretraining" (11 papers)

 1. POET: Prompt Offset Tuning for Continual Human Action Adaptation
    Prachi Garg, Joseph K J, V. Balasubramanian, Necati Cihan Camgöz, Chengde Wan, Kenrick Kin, Weiguang Si, Shugao Ma, Fernando De la Torre
    25 Apr 2025

 2. Adapter-Enhanced Semantic Prompting for Continual Learning
    Baocai Yin, Ji Zhao, Huajie Jiang, Ningning Hou, Yongli Hu, Amin Beheshti, Ming-Hsuan Yang, Yuankai Qi
    Topics: CLL, VLM
    15 Dec 2024

 3. Not Just Object, But State: Compositional Incremental Learning without Forgetting
    Yanyi Zhang, Binglin Qiu, Qi Jia, Yu Liu, Ran He
    Topics: CLL
    04 Nov 2024

 4. CASA: Class-Agnostic Shared Attributes in Vision-Language Models for Efficient Incremental Object Detection
    Mingyi Guo, Yuyang Liu, Zongying Lin, Peixi Peng, Yonghong Tian
    Topics: VLM
    08 Oct 2024

 5. Exploiting the Semantic Knowledge of Pre-trained Text-Encoders for Continual Learning
    Lu Yu, Hesong Li, Ying Fu, J. Weijer, Changsheng Xu
    Topics: CLL
    02 Aug 2024

 6. HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning
    Liyuan Wang, Jingyi Xie, Xingxing Zhang, Hang Su, Jun Zhu
    Topics: CLL
    07 Jul 2024

 7. Learning without Forgetting for Vision-Language Models
    Da-Wei Zhou, Yuanhan Zhang, Jingyi Ning, De-Chuan Zhan, Ziwei Liu
    Topics: VLM, CLL
    30 May 2023

 8. Vision Transformers in 2022: An Update on Tiny ImageNet
    Ethan Huynh
    Topics: ViT
    21 May 2022

 9. P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
    Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang
    Topics: VLM
    14 Oct 2021

10. The Power of Scale for Parameter-Efficient Prompt Tuning
    Brian Lester, Rami Al-Rfou, Noah Constant
    Topics: VPVLM
    18 Apr 2021

11. Distilling Causal Effect of Data in Class-Incremental Learning
    Xinting Hu, Kaihua Tang, C. Miao, Xiansheng Hua, Hanwang Zhang
    Topics: CML
    02 Mar 2021