SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training

15 August 2024
Gengwei Zhang, Liyuan Wang, Guoliang Kang, Ling Chen, Yunchao Wei
VLM · CLL

Papers citing "SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training"

6 citing papers:

DUKAE: DUal-level Knowledge Accumulation and Ensemble for Pre-Trained Model-Based Continual Learning
Songze Li, Tonghua Su, Xu-Yao Zhang, Qixing Xu, Zhongjie Wang
CLL · 09 Apr 2025

Continual evaluation for lifelong learning: Identifying the stability gap
Matthias De Lange, Gido M. van de Ven, Tinne Tuytelaars
CLL · 26 May 2022

Mitigating Neural Network Overconfidence with Logit Normalization
Hongxin Wei, Renchunzi Xie, Hao-Ran Cheng, Lei Feng, Bo An, Yixuan Li
OODD · 19 May 2022

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM · 11 Nov 2021

ImageNet-21K Pretraining for the Masses
T. Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor
SSeg, VLM, CLIP · 22 Apr 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM · 18 Apr 2021