BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models

18 June 2021 · arXiv:2106.10199
Elad Ben-Zaken, Shauli Ravfogel, Yoav Goldberg
ArXiv · PDF · HTML
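
BitFit (bias-term fine-tuning) freezes all pre-trained Transformer weights and updates only the bias terms, together with the task-specific head, during fine-tuning. A minimal PyTorch-style sketch of that parameter selection, assuming the Hugging Face `transformers` library and a BERT classification checkpoint (the model name, label count, and learning rate below are illustrative, not taken from the paper), could look like this:

```python
# Illustrative BitFit-style sketch (not the authors' reference implementation):
# freeze all pre-trained weights and train only bias terms plus the task head.
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2  # hypothetical checkpoint and label count
)

for name, param in model.named_parameters():
    # Bias vectors end with ".bias"; the freshly initialized head starts with "classifier.".
    param.requires_grad = name.endswith(".bias") or name.startswith("classifier.")

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4  # lr is illustrative
)

n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
n_total = sum(p.numel() for p in model.parameters())
print(f"trainable: {n_trainable:,} / {n_total:,} parameters")
```

Only a small fraction of the model's parameters (well under 1%) ends up trainable, which is what makes the approach parameter-efficient.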

Papers citing "BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models"

Showing 50 of 815 citing papers:
  • Robust and Efficient Fine-tuning of LLMs with Bayesian Reparameterization of Low-Rank Adaptation · Ayan Sengupta, Vaibhav Seth, Arinjay Pathak, Natraj Raman, Sriram Gopalakrishnan, Tanmoy Chakraborty · [BDL] · 07 Nov 2024
  • MambaPEFT: Exploring Parameter-Efficient Fine-Tuning for Mamba · Masakazu Yoshimura, Teruaki Hayashi, Yota Maeda · [Mamba] · 06 Nov 2024
  • Efficient and Effective Adaptation of Multimodal Foundation Models in Sequential Recommendation · Junchen Fu, Xuri Ge, Xin Xin, Alexandros Karatzoglou, Ioannis Arapakis, Kaiwen Zheng, Yongxin Ni, J. Jose · 05 Nov 2024
  • Scalable Efficient Training of Large Language Models with Low-dimensional Projected Attention · Xingtai Lv, Ning Ding, Kaiyan Zhang, Ermo Hua, Ganqu Cui, Bowen Zhou · 04 Nov 2024
  • Parameter-Efficient Fine-Tuning of Large Language Models for Unit Test Generation: An Empirical Study · André Storhaug, Jingyue Li · [ALM] · 04 Nov 2024
  • Expanding Sparse Tuning for Low Memory Usage · Shufan Shen, Junshu Sun, Xiangyang Ji, Qingming Huang, Shuhui Wang · 04 Nov 2024
  • Rethinking Weight Decay for Robust Fine-Tuning of Foundation Models · Junjiao Tian, Chengyue Huang, Z. Kira · 03 Nov 2024
  • Bayesian-guided Label Mapping for Visual Reprogramming · C. Cai, Zesheng Ye, Lei Feng, Jianzhong Qi, Feng Liu · 31 Oct 2024
  • CleaR: Towards Robust and Generalized Parameter-Efficient Fine-Tuning for Noisy Label Learning · Yeachan Kim, Junho Kim, SangKeun Lee · [NoLa, AAML] · 31 Oct 2024
  • Efficient Adaptation of Pre-trained Vision Transformer via Householder Transformation · Wei Dong, Yuan Sun, Yiting Yang, Xing Zhang, Zhijun Lin, Qingsen Yan, H. Zhang, Peng Wang, Yang Yang, Hengtao Shen · 30 Oct 2024
  • Capacity Control is an Effective Memorization Mitigation Mechanism in Text-Conditional Diffusion Models · Raman Dutt, Pedro Sanchez, Ondrej Bohdal, Sotirios A. Tsaftaris, Timothy M. Hospedales · 29 Oct 2024
  • IntLoRA: Integral Low-rank Adaptation of Quantized Diffusion Models · Hang Guo, Yawei Li, Tao Dai, Shu-Tao Xia, Luca Benini · [MQ] · 29 Oct 2024
  • NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks · Yongchang Hao, Yanshuai Cao, Lili Mou · [MQ] · 28 Oct 2024
  • GeoLoRA: Geometric integration for parameter efficient fine-tuning · Steffen Schotthöfer, Emanuele Zangrando, Gianluca Ceruti, Francesco Tudisco, J. Kusch · [AI4CE] · 24 Oct 2024
  • Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies · L. Wang, Sheng Chen, Linnan Jiang, Shu Pan, Runze Cai, Sen Yang, Fei Yang · 24 Oct 2024
  • MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning · Jingfan Zhang, Yi Zhao, Dan Chen, Xing Tian, Huanran Zheng, Wei Zhu · [MoE] · 23 Oct 2024
  • ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning · Zhiwei Hao, Jianyuan Guo, Li Shen, Yong Luo, Han Hu, Yonggang Wen · [VLM] · 23 Oct 2024
  • PETAH: Parameter Efficient Task Adaptation for Hybrid Transformers in a resource-limited Context · Maximilian Augustin, Syed Shakib Sarwar, Mostafa Elhoushi, Sai Qian Zhang, Yuecheng Li, B. D. Salvo · 23 Oct 2024
  • Cross-model Control: Improving Multiple Large Language Models in One-time Training · Jiayi Wu, Hao-Lun Sun, Hengyi Cai, Lixin Su, S. Wang, Dawei Yin, Xiang Li, Ming Gao · [MU] · 23 Oct 2024
  • Understanding Layer Significance in LLM Alignment · Guangyuan Shi, Zexin Lu, Xiaoyu Dong, Wenlong Zhang, Xuanyu Zhang, Yujie Feng, Xiao-Ming Wu · 23 Oct 2024
  • Towards Real Zero-Shot Camouflaged Object Segmentation without Camouflaged Annotations · Cheng Lei, Jie Fan, Xinran Li, Tianzhu Xiang, Ao Li, Ce Zhu, Le Zhang · 22 Oct 2024
  • GSSF: Generalized Structural Sparse Function for Deep Cross-modal Metric Learning · Haiwen Diao, Ying Zhang, Shang Gao, Jiawen Zhu, Long Chen, Huchuan Lu · 20 Oct 2024
  • FiTv2: Scalable and Improved Flexible Vision Transformer for Diffusion Model · ZiDong Wang, Zeyu Lu, Di Huang, Cai Zhou, Wanli Ouyang, Lei Bai · 17 Oct 2024
  • LoLDU: Low-Rank Adaptation via Lower-Diag-Upper Decomposition for Parameter-Efficient Fine-Tuning · Yiming Shi, Jiwei Wei, Yujia Wu, Ran Ran, Chengwei Sun, Shiyuan He, Yang Yang · [ALM] · 17 Oct 2024
  • Unlocking the Capabilities of Masked Generative Models for Image Synthesis via Self-Guidance · Jiwan Hur, Dong-Jae Lee, Gyojin Han, Jaehyun Choi, Yunho Jeon, Junmo Kim · [DiffM] · 17 Oct 2024
  • Communication-Efficient and Tensorized Federated Fine-Tuning of Large Language Models · Sajjad Ghiasvand, Yifan Yang, Zhiyu Xue, Mahnoosh Alizadeh, Zheng Zhang, Ramtin Pedarsani · [FedML] · 16 Oct 2024
  • LoKO: Low-Rank Kalman Optimizer for Online Fine-Tuning of Large Models · Hossein Abdi, Mingfei Sun, Andi Zhang, Samuel Kaski, Wei Pan · 15 Oct 2024
  • Domain-Conditioned Transformer for Fully Test-time Adaptation · Yushun Tang, Shuoshuo Chen, Jiyuan Jia, Yi Zhang, Zhihai He · 14 Oct 2024
  • Is Parameter Collision Hindering Continual Learning in LLMs? · Shuo Yang, Kun-Peng Ning, Yu-Yang Liu, Jia-Yu Yao, Yong-Hong Tian, Yi-Bing Song, Li Yuan · [MoMe, CLL] · 14 Oct 2024
  • RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates · Md. Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi · 14 Oct 2024
  • Understanding Robustness of Parameter-Efficient Tuning for Image Classification · Jiacheng Ruan, Xian Gao, Suncheng Xiang, Mingye Xie, Ting Liu, Yuzhuo Fu · [AAML, VLM] · 13 Oct 2024
  • t-READi: Transformer-Powered Robust and Efficient Multimodal Inference for Autonomous Driving · Pengfei Hu, Yuhang Qian, Tianyue Zheng, Ang Li, Zhe Chen, Yue Gao, Xiuzhen Cheng, Jun-Jie Luo · 13 Oct 2024
  • ELICIT: LLM Augmentation via External In-Context Capability · Futing Wang, Jianhao Yan, Yue Zhang, Tao Lin · 12 Oct 2024
  • QEFT: Quantization for Efficient Fine-Tuning of LLMs · Changhun Lee, Jun-gyu Jin, Younghyun Cho, Eunhyeok Park · [MQ] · 11 Oct 2024
  • Parameter-Efficient Fine-Tuning of Large Language Models using Semantic Knowledge Tuning · Nusrat Jahan Prottasha, Asif Mahmud, Md. Shohanur Islam Sobuj, Prakash Bhat, Md. Kowsher, Niloofar Yousefi, O. Garibay · 11 Oct 2024
  • Parameter-Efficient Fine-Tuning of State Space Models · Kevin Galim, Wonjun Kang, Yuchen Zeng, H. Koo, Kangwook Lee · 11 Oct 2024
  • Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning · Dingkang Liang, Tianrui Feng, Xin Zhou, Yumeng Zhang, Zhikang Zou, Xiang Bai · 10 Oct 2024
  • AdaShadow: Responsive Test-time Model Adaptation in Non-stationary Mobile Environments · Cheng Fang, Sicong Liu, Zimu Zhou, Bin Guo, Jiaqi Tang, Ke Ma, Zhiwen Yu · [TTA] · 10 Oct 2024
  • Packing Analysis: Packing Is More Appropriate for Large Models or Datasets in Supervised Fine-tuning · Shuhe Wang, Guoyin Wang, Y. Wang, Jiwei Li, Eduard H. Hovy, Chen Guo · 10 Oct 2024
  • ACCEPT: Adaptive Codebook for Composite and Efficient Prompt Tuning · Yu-Chen Lin, Wei-Hua Li, Jun-Cheng Chen, Chu-Song Chen · 10 Oct 2024
  • QuAILoRA: Quantization-Aware Initialization for LoRA · Neal Lawton, Aishwarya Padmakumar, Judith Gaspers, Jack FitzGerald, Anoop Kumar, Greg Ver Steeg, Aram Galstyan · [MQ] · 09 Oct 2024
  • SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers · V. Chekalina, Anna Rudenko, Gleb Mezentsev, Alexander Mikhalev, Alexander Panchenko, Ivan V. Oseledets · 09 Oct 2024
  • OD-Stega: LLM-Based Near-Imperceptible Steganography via Optimized Distributions · Yu-Shin Huang, Peter Just, Krishna Narayanan, Chao Tian · 06 Oct 2024
  • Implicit to Explicit Entropy Regularization: Benchmarking ViT Fine-tuning under Noisy Labels · Maria Marrium, Arif Mahmood, Mohammed Bennamoun · [NoLa, AAML] · 05 Oct 2024
  • LoRTA: Low Rank Tensor Adaptation of Large Language Models · Ignacio Hounie, Charilaos I. Kanatsoulis, Arnuv Tandon, Alejandro Ribeiro · 05 Oct 2024
  • Learning from Offline Foundation Features with Tensor Augmentations · Emir Konuk, Christos Matsoukas, Moein Sorkhei, Phitchapha Lertsiravaramet, Kevin Smith · [OffRL] · 03 Oct 2024
  • Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts · Minh Le, Chau Nguyen, Huy Nguyen, Quyen Tran, Trung Le, Nhat Ho · 03 Oct 2024
  • Differentially Private Parameter-Efficient Fine-tuning for Large ASR Models · Hongbin Liu, Lun Wang, Om Thakkar, Abhradeep Thakurta, Arun Narayanan · 02 Oct 2024
  • PrivTuner with Homomorphic Encryption and LoRA: A P3EFT Scheme for Privacy-Preserving Parameter-Efficient Fine-Tuning of AI Foundation Models · Yang Li, Wenhan Yu, Jun Zhao · 01 Oct 2024
  • HDMoLE: Mixture of LoRA Experts with Hierarchical Routing and Dynamic Thresholds for Fine-Tuning LLM-based ASR Models · Bingshen Mu, Kun Wei, Qijie Shao, Yong Xu, Lei Xie · [MoE] · 30 Sep 2024