ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Orthogonal Subspace Learning for Language Model Continual Learning


22 October 2023
Xiao Wang
Tianze Chen
Qiming Ge
Han Xia
Rong Bao
Rui Zheng
Qi Zhang
Tao Gui
Xuanjing Huang
    CLL

Papers citing "Orthogonal Subspace Learning for Language Model Continual Learning"

8 / 8 papers shown
SEFE: Superficial and Essential Forgetting Eliminator for Multimodal Continual Instruction Tuning
Jinpeng Chen
Runmin Cong
Yuzhi Zhao
Hongzheng Yang
Guangneng Hu
H. Ip
Sam Kwong
CLL
KELM
27
0
0
05 May 2025
AlphaFuse: Learn ID Embeddings for Sequential Recommendation in Null Space of Language Embeddings
Guoqing Hu
An Zhang
Shuo Liu
Zhibo Cai
Xun Yang
X. Wang
14
0
0
27 Apr 2025
Compositional Subspace Representation Fine-tuning for Adaptive Large Language Models
Andy Zhou
MoMe
65
0
0
13 Mar 2025
Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA
James Smith
Yen-Chang Hsu
Lingyu Zhang
Ting Hua
Z. Kira
Yilin Shen
Hongxia Jin
DiffM
99
62
0
12 Apr 2023
Training language models to follow instructions with human feedback
Long Ouyang
Jeff Wu
Xu Jiang
Diogo Almeida
Carroll L. Wainwright
...
Amanda Askell
Peter Welinder
Paul Christiano
Jan Leike
Ryan J. Lowe
OSLM
ALM
270
8,441
0
04 Mar 2022
LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5
Chengwei Qin
Shafiq R. Joty
CLL
140
76
0
14 Oct 2021
The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester
Rami Al-Rfou
Noah Constant
VPVLM
254
2,999
0
18 Apr 2021
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang
Amanpreet Singh
Julian Michael
Felix Hill
Omer Levy
Samuel R. Bowman
ELM
267
6,003
0
20 Apr 2018