BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models
Elad Ben-Zaken, Shauli Ravfogel, Yoav Goldberg
arXiv:2106.10199, 18 June 2021
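For context, BitFit fine-tunes only the bias terms of a pretrained Transformer (together with the task-specific classification head), keeping all other weights frozen. Below is a minimal sketch of that idea, assuming PyTorch and Hugging Face Transformers; the checkpoint name, head name, and learning rate are illustrative choices, not the paper's exact setup.

    # Bias-only (BitFit-style) fine-tuning sketch; checkpoint, head name, and
    # learning rate are illustrative assumptions, not the paper's exact setup.
    import torch
    from transformers import AutoModelForSequenceClassification

    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    # Freeze everything, then re-enable gradients only for bias terms and the
    # newly initialized classification head (which has no pretrained weights).
    for name, param in model.named_parameters():
        param.requires_grad = ("bias" in name) or name.startswith("classifier")

    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(trainable, lr=1e-3)
    # A standard training loop over labeled batches follows; well under 1% of
    # the model's parameters receive gradient updates.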

Papers citing "BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models" (50 of 816 papers shown)
Multimodal Instruction Tuning with Conditional Mixture of LoRA (24 Feb 2024)
  Ying Shen, Zhiyang Xu, Qifan Wang, Yu Cheng, Wenpeng Yin, Lifu Huang
Parameter-efficient Prompt Learning for 3D Point Cloud Understanding (24 Feb 2024) [VPVLM]
  Hongyu Sun, Yongcai Wang, Wang Chen, Haoran Deng, Deying Li
Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning (24 Feb 2024) [MoE]
  Yong Liu, Zirui Zhu, Chaoyu Gong, Minhao Cheng, Cho-Jui Hsieh, Yang You
Advancing Parameter Efficiency in Fine-tuning via Representation Editing (23 Feb 2024)
  Muling Wu, Wenhao Liu, Xiaohua Wang, Tianlong Li, Changze Lv, Zixuan Ling, Jianhao Zhu, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang
PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning (23 Feb 2024) [MoE, MoMe]
  Zhisheng Lin, Han Fu, Chenghao Liu, Zhuo Li, Jianling Sun
Q-Probe: A Lightweight Approach to Reward Maximization for Language Models (22 Feb 2024)
  Kenneth Li, Samy Jelassi, Hugh Zhang, Sham Kakade, Martin Wattenberg, David Brandfonbrener
Towards Unified Task Embeddings Across Multiple Models: Bridging the Gap for Prompt-Based Large Language Models and Beyond (22 Feb 2024) [MoMe, AIFin]
  Xinyu Wang, Hainiu Xu, Lin Gui, Yulan He
CST: Calibration Side-Tuning for Parameter and Memory Efficient Transfer Learning (20 Feb 2024)
  Feng Chen
SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning (19 Feb 2024) [MoE]
  Zhihao Wen, Jie Zhang, Yuan Fang
Parameter Efficient Finetuning for Speech Emotion Recognition and Domain Adaptation (19 Feb 2024)
  Nineli Lashkarashvili, Wen Wu, Guangzhi Sun, P. Woodland
Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic (19 Feb 2024) [MoMe]
  Rishabh Bhardwaj, Do Duc Anh, Soujanya Poria
GNNavi: Navigating the Information Flow in Large Language Models by Graph Neural Network (18 Feb 2024)
  Shuzhou Yuan, Ercong Nie, Michael Farber, Helmut Schmid, Hinrich Schütze
LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models (18 Feb 2024)
  Yifan Yang, Jiajun Zhou, Ngai Wong, Zheng Zhang
Prompt-Based Bias Calibration for Better Zero/Few-Shot Learning of Language Models (15 Feb 2024)
  Kang He, Yinghan Long, Kaushik Roy
Both Matter: Enhancing the Emotional Intelligence of Large Language Models without Compromising the General Intelligence (15 Feb 2024)
  Weixiang Zhao, Zhuojun Li, Shilong Wang, Yang Wang, Yulin Hu, Yanyan Zhao, Chen Wei, Bing Qin
Model Compression and Efficient Inference for Large Language Models: A Survey (15 Feb 2024) [MQ]
  Wenxiao Wang, Wei Chen, Yicong Luo, Yongliu Long, Zhengkai Lin, Liye Zhang, Binbin Lin, Deng Cai, Xiaofei He
Quantified Task Misalignment to Inform PEFT: An Exploration of Domain Generalization and Catastrophic Forgetting in CLIP (14 Feb 2024) [CLL]
  Laura Niss, Kevin Vogt-Lowell, Theodoros Tsiligkaridis
Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning (13 Feb 2024) [AAML, VPVLM]
  Haeju Lee, Minchan Jeong, SeYoung Yun, Kee-Eung Kim
Let Your Graph Do the Talking: Encoding Structured Data for LLMs (08 Feb 2024) [GNN]
  Bryan Perozzi, Bahare Fatemi, Dustin Zelle, Anton Tsitsulin, Mehran Kazemi, Rami Al-Rfou, Jonathan J. Halcrow
ConvLoRA and AdaBN based Domain Adaptation via Self-Training (07 Feb 2024)
  Sidra Aleem, J. Dietlmeier, Eric Arazo, Suzanne Little
Low-rank Attention Side-Tuning for Parameter-Efficient Fine-Tuning (06 Feb 2024)
  Ningyuan Tang, Minghao Fu, Ke Zhu, Jianxin Wu
Flora: Low-Rank Adapters Are Secretly Gradient Compressors (05 Feb 2024)
  Yongchang Hao, Yanshuai Cao, Lili Mou
Time-, Memory- and Parameter-Efficient Visual Adaptation (05 Feb 2024) [VLM]
  Otniel-Bogdan Mercea, Alexey Gritsenko, Cordelia Schmid, Anurag Arnab
Revisiting the Power of Prompt for Visual Tuning (04 Feb 2024) [VLM]
  Yuzhu Wang, Lechao Cheng, Chaowei Fang, Dingwen Zhang, Manni Duan, Meng Wang
Advancing Graph Representation Learning with Large Language Models: A Comprehensive Survey of Techniques (04 Feb 2024)
  Qiheng Mao, Zemin Liu, Chenghao Liu, Zhuo Li, Jianling Sun
Learning Semantic Proxies from Visual Prompts for Parameter-Efficient Fine-Tuning in Deep Metric Learning (04 Feb 2024)
  Li Ren, Chen Chen, Liqiang Wang, Kien Hua
Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey (03 Feb 2024) [VLM]
  Yi Xin, Jianjiang Yang, Haodi Zhou, Junlong Du, Yue Fan, Qing Li, Yuntao Du
Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters (01 Feb 2024) [MoE]
  Umberto Cappellazzo, Daniele Falavigna, A. Brutti
SA-MDKIF: A Scalable and Adaptable Medical Domain Knowledge Injection Framework for Large Language Models (01 Feb 2024) [LM&MA]
  Tianhan Xu, Zhe Hu, LingSen Chen, Bin Li
Convolution Meets LoRA: Parameter Efficient Finetuning for Segment Anything Model (31 Jan 2024)
  Zihan Zhong, Zhiqiang Tang, Tong He, Haoyang Fang, Chun Yuan
EarthGPT: A Universal Multi-modal Large Language Model for Multi-sensor Image Comprehension in Remote Sensing Domain (30 Jan 2024)
  Wei Zhang, Miaoxin Cai, Tong Zhang, Zhuang Yin, Xuerui Mao
A Comprehensive Survey of Compression Algorithms for Language Models (27 Jan 2024) [MQ]
  Seungcheol Park, Jaehyeon Choi, Sojin Lee, U. Kang
HiFT: A Hierarchical Full Parameter Fine-Tuning Strategy (26 Jan 2024)
  Yongkang Liu, Yiqun Zhang, Qian Li, Tong Liu, Shi Feng, Daling Wang, Yifei Zhang, Hinrich Schütze
The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness (25 Jan 2024)
  Mengyao Du, Miao Zhang, Yuwen Pu, Kai Xu, Shouling Ji, Quanjun Yin
Dynamic Layer Tying for Parameter-Efficient Transformers (23 Jan 2024)
  Tamir David Hay, Lior Wolf
SLANG: New Concept Comprehension of Large Language Models (23 Jan 2024) [KELM]
  Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Xueqi Chen
APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference (22 Jan 2024)
  Bowen Zhao, Hannaneh Hajishirzi, Qingqing Cao
PRILoRA: Pruned and Rank-Increasing Low-Rank Adaptation (20 Jan 2024)
  Nadav Benedek, Lior Wolf
SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models (16 Jan 2024) [KELM, CLL]
  Weixiang Zhao, Shilong Wang, Yulin Hu, Yanyan Zhao, Bing Qin, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che
Editing Arbitrary Propositions in LLMs without Subject Labels (15 Jan 2024) [KELM]
  Itai Feigenbaum, Devansh Arpit, Huan Wang, Shelby Heinecke, Juan Carlos Niebles, Weiran Yao, Caiming Xiong, Silvio Savarese
PersianMind: A Cross-Lingual Persian-English Large Language Model (12 Jan 2024) [CLL, LRM]
  Pedram Rostami, Ali Salemi, M. Dousti
Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models (12 Jan 2024)
  Yae Jee Cho, Luyang Liu, Zheng Xu, Aldi Fahrezi, Gauri Joshi
Scaling Laws for Forgetting When Fine-Tuning Large Language Models (11 Jan 2024) [CLL]
  Damjan Kalajdzievski
A Survey on Efficient Federated Learning Methods for Foundation Model Training (09 Jan 2024) [FedML]
  Herbert Woisetschläger, Alexander Isenko, Shiqiang Wang, R. Mayer, Hans-Arno Jacobsen
Empirical Analysis of Efficient Fine-Tuning Methods for Large Pre-Trained Language Models (08 Jan 2024)
  Nigel Doering, Cyril Gorlla, Trevor Tuttle, Adhvaith Vijay
Data-Centric Foundation Models in Computational Healthcare: A Survey (04 Jan 2024) [AI4CE]
  Yunkun Zhang, Jin Gao, Zheling Tan, Lingfeng Zhou, Kexin Ding, Mu Zhou, Shaoting Zhang, Dequan Wang
Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models (01 Jan 2024) [ALM]
  Terry Yue Zhuo, A. Zebaze, Nitchakarn Suppattarachai, Leandro von Werra, H. D. Vries, Qian Liu, Niklas Muennighoff
Differentially Private Low-Rank Adaptation of Large Language Model Using Federated Learning (29 Dec 2023)
  Xiao-Yang Liu, Rongyi Zhu, Daochen Zha, Jiechao Gao, Shan Zhong, Matt White, Meikang Qiu
A Comprehensive Evaluation of Parameter-Efficient Fine-Tuning on Software Engineering Tasks (25 Dec 2023)
  Wentao Zou, Qi Li, Jidong Ge, Chuanyi Li, Xiaoyu Shen, LiGuo Huang, Bin Luo
A Split-and-Privatize Framework for Large Language Model Fine-Tuning (25 Dec 2023)
  Xicong Shen, Yang Liu, Huiqi Liu, Jue Hong, Bing Duan, Zirui Huang, Yunlong Mao, Ye Wu, Di Wu