HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning

7 July 2024
Liyuan Wang, Jingyi Xie, Xingxing Zhang, Hang Su, Jun Zhu
Topics: CLL

Papers citing "HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning"

5 / 5 papers shown
1. SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training
   Gengwei Zhang, Liyuan Wang, Guoliang Kang, Ling Chen, Yunchao Wei
   Topics: VLM, CLL · 15 Aug 2024
2. First Session Adaptation: A Strong Replay-Free Baseline for Class-Incremental Learning
   A. Panos, Yuriko Kobe, Daniel Olmeda Reino, Rahaf Aljundi, Richard E. Turner
   Topics: CLL · 23 Mar 2023
3. Generalized Out-of-Distribution Detection: A Survey
   Jingkang Yang, Kaiyang Zhou, Yixuan Li, Ziwei Liu
   21 Oct 2021
4. ImageNet-21K Pretraining for the Masses
   T. Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor
   Topics: SSeg, VLM, CLIP · 22 Apr 2021
5. The Power of Scale for Parameter-Efficient Prompt Tuning
   Brian Lester, Rami Al-Rfou, Noah Constant
   Topics: VPVLM · 18 Apr 2021