See Further for Parameter Efficient Fine-tuning by Standing on the Shoulders of Decomposition

7 July 2024
Chongjie Si, Xiaokang Yang, Wei Shen

Papers citing "See Further for Parameter Efficient Fine-tuning by Standing on the Shoulders of Decomposition"

11 / 11 papers shown

 1. Customizing Language Models with Instance-wise LoRA for Sequential Recommendation
    Xiaoyu Kong, Jiancan Wu, An Zhang, Leheng Sheng, Hui Lin, Xiang Wang, Xiangnan He
    AI4TS · 4 citations · 19 Aug 2024

 2. Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
    Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang
    301 citations · 21 Mar 2024

 3. LoTR: Low Tensor Rank Weight Adaptation
    Daniel Bershatsky, Daria Cherniuk, Talgat Daulbaev, A. Mikhalev, Ivan V. Oseledets
    7 citations · 02 Feb 2024

 4. Unified Low-Resource Sequence Labeling by Sample-Aware Dynamic Sparse Finetuning
    Sarkar Snigdha Sarathi Das, Ranran Haoran Zhang, Peng Shi, Wenpeng Yin, Rui Zhang
    4 citations · 07 Nov 2023

 5. Customized Segment Anything Model for Medical Image Segmentation
    Kaiwen Zhang, Dong Liu
    MedIm, VLM · 276 citations · 26 Apr 2023

 6. AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
    Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo
    631 citations · 26 May 2022

 7. MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators
    Zhixing Tan, Xiangwen Zhang, Shuo Wang, Yang Liu
    VLM, LRM · 51 citations · 13 Oct 2021

 8. The Power of Scale for Parameter-Efficient Prompt Tuning
    Brian Lester, Rami Al-Rfou, Noah Constant
    VPVLM · 3,784 citations · 18 Apr 2021

 9. Making Pre-trained Language Models Better Few-shot Learners
    Tianyu Gao, Adam Fisch, Danqi Chen
    1,898 citations · 31 Dec 2020

10. Pre-trained Models for Natural Language Processing: A Survey
    Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
    LM&MA, VLM · 1,444 citations · 18 Mar 2020

11. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
    Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
    ELM · 6,927 citations · 20 Apr 2018