One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning
Guangtao Zeng, Peiyuan Zhang, Wei Lu
arXiv:2305.17682 · 28 May 2023

Papers citing "One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning"

18 papers shown

Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies
L. Wang, Sheng Chen, Linnan Jiang, Shu Pan, Runze Cai, Sen Yang, Fei Yang
24 Oct 2024

RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates
Md. Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi
14 Oct 2024

Propulsion: Steering LLM with Tiny Fine-Tuning
Md. Kowsher, Nusrat Jahan Prottasha, Prakash Bhat
17 Sep 2024

Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning
Haobo Song, Hao Zhao, Soumajit Majumder, Tao Lin
01 Jul 2024

DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution
Yulong Mao, Kaiyu Huang, Changhao Guan, Ganglin Bao, Fengran Mo, Jinan Xu
27 May 2024

RST-LoRA: A Discourse-Aware Low-Rank Adaptation for Long Document Abstractive Summarization
Dongqi Pu, Vera Demberg
01 May 2024

Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang
21 Mar 2024

Read to Play (R2-Play): Decision Transformer with Multimodal Game Instruction
Yonggang Jin, Ge Zhang, Hao Zhao, Tianyu Zheng, Jiawei Guo, Liuyu Xiang, Shawn Yue, Stephen W. Huang, Zhaofeng He, Jie Fu
06 Feb 2024 · OffRL

What the Weight?! A Unified Framework for Zero-Shot Knowledge Composition
Carolin Holtermann, Markus Frohmann, Navid Rekabsaz, Anne Lauscher
23 Jan 2024 · MoMe

Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment
Lingling Xu, Haoran Xie, S. J. Qin, Xiaohui Tao, F. Wang
19 Dec 2023

Survival of the Most Influential Prompts: Efficient Black-Box Prompt Search via Clustering and Pruning
Han Zhou, Xingchen Wan, Ivan Vulić, Anna Korhonen
19 Oct 2023 · LLMAG

Prototype-based HyperAdapter for Sample-Efficient Multi-task Tuning
Hao Zhao, Jie Fu, Zhaofeng He
18 Oct 2023

Decomposed Prompt Tuning via Low-Rank Reparameterization
Yao Xiao, Lu Xu, Jiaxi Li, Wei Lu, Xiaoli Li
16 Oct 2023 · VLM

ScaLearn: Simple and Highly Parameter-Efficient Task Transfer by Learning to Scale
Markus Frohmann, Carolin Holtermann, Shahed Masoudian, Anne Lauscher, Navid Rekabsaz
02 Oct 2023

IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning
Feiyu F. Zhang, Liangzhi Li, Jun-Cheng Chen, Zhouqian Jiang, Bowen Wang, Yiming Qian
23 Aug 2023

XPrompt: Exploring the Extreme of Prompt Tuning
Fang Ma, Chen Zhang, Lei Ren, Jingang Wang, Qifan Wang, Wei Yu Wu, Xiaojun Quan, Dawei Song
10 Oct 2022 · VLM

Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning
Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang
13 Sep 2021 · LRM

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018 · ELM