ResearchTrend.AI
MerA: Merging Pretrained Adapters For Few-Shot Learning
arXiv:2308.15982
30 August 2023
Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Tianyi Zhou, Dacheng Tao

Papers citing "MerA: Merging Pretrained Adapters For Few-Shot Learning" (12 papers)

Parameter-Efficient Fine-Tuning in Large Models: A Survey of Methodologies
L. Wang, Sheng Chen, Linnan Jiang, Shu Pan, Runze Cai, Sen Yang, Fei Yang
24 Oct 2024

FLoRA: Federated Fine-Tuning Large Language Models with Heterogeneous Low-Rank Adaptations
Ziyao Wang, Zheyu Shen, Yexiao He, Guoheng Sun, Hongyi Wang, Lingjuan Lyu, Ang Li
09 Sep 2024

Spectral Adapter: Fine-Tuning in Spectral Space
Fangzhao Zhang, Mert Pilanci
22 May 2024

PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models
Fanxu Meng, Zhaohui Wang, Muhan Zhang
03 Apr 2024

Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang
21 Mar 2024

Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment
Lingling Xu, Haoran Xie, S. J. Qin, Xiaohui Tao, F. Wang
19 Dec 2023

RIGHT: Retrieval-augmented Generation for Mainstream Hashtag Recommendation
Run-Ze Fan, Yixing Fan, Jiangui Chen, J. Guo, Ruqing Zhang, Xueqi Cheng
16 Dec 2023

Merging Experts into One: Improving Computational Efficiency of Mixture of Experts
Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Tianyi Zhou, Dacheng Tao
15 Oct 2023

Unlikelihood Tuning on Negative Samples Amazingly Improves Zero-Shot Translation
Junjie Yang, Liang Ding, Li Shen, Matthieu Labeau, Yibing Zhan, Weifeng Liu, Dacheng Tao
28 Sep 2023

Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
31 Dec 2020

Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
25 Sep 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018