The Expressive Power of Low-Rank Adaptation
arXiv:2310.17513
26 October 2023
Yuchen Zeng
Kangwook Lee
Papers citing "The Expressive Power of Low-Rank Adaptation" (47 papers)
Diffusion Model Quantization: A Review
Qian Zeng, Chenggong Hu, Mingli Song, Jie Song
08 May 2025

Fast and Low-Cost Genomic Foundation Models via Outlier Removal
Haozheng Luo, Chenghao Qiu, Maojiang Su, Zhihan Zhou, Zoe Mehta, Guo Ye, Jerry Yao-Chieh Hu, Han Liu
01 May 2025

Theoretical Foundation of Flow-Based Time Series Generation: Provable Approximation, Generalization, and Efficiency
Jiangxuan Long, Zhao-quan Song, Chiwun Yang
18 Mar 2025

RaSA: Rank-Sharing Low-Rank Adaptation
Zhiwei He, Zhaopeng Tu, Xing Wang, Xingyu Chen, Z. Wang, Jiahao Xu, Tian Liang, Wenxiang Jiao, Z. Zhang, Rui Wang
16 Mar 2025

Understanding the Learning Dynamics of LoRA: A Gradient Flow Perspective on Low-Rank Adaptation in Matrix Factorization
Ziqing Xu, Hancheng Min, Lachlan Ewen MacDonald, Jinqi Luo, Salma Tarmoun, Enrique Mallada, René Vidal
10 Mar 2025

AdaptSR: Low-Rank Adaptation for Efficient and Scalable Real-World Super-Resolution
Cansu Korkmaz, Nancy Mehta, Radu Timofte
10 Mar 2025

GBT-SAM: Adapting a Foundational Deep Learning Model for Generalizable Brain Tumor Segmentation via Efficient Integration of Multi-Parametric MRI Data
Cecilia Diana-Albelda, Roberto Alcover-Couso, Álvaro García-Martín, Jesús Bescós, Marcos Escudero-Viñolo
06 Mar 2025

The impact of allocation strategies in subset learning on the expressive power of neural networks
Ofir Schlisselberg, Ran Darshan
10 Feb 2025

Tensor Product Attention Is All You Need
Yifan Zhang, Yifeng Liu, Huizhuo Yuan, Zhen Qin, Yang Yuan, Q. Gu, Andrew Chi-Chih Yao
11 Jan 2025

GraphLoRA: Structure-Aware Contrastive Low-Rank Adaptation for Cross-Graph Transfer Learning
Zhe-Rui Yang, Jindong Han, Chang-Dong Wang, Hao Liu
08 Jan 2025

YOLO-UniOW: Efficient Universal Open-World Object Detection
Lihao Liu, Juexiao Feng, Hui Chen, Ao Wang, Lin Song, J. Han, Guiguang Ding
31 Dec 2024

LoRA-Mini: Adaptation Matrices Decomposition and Selective Training
Ayush Singh, Rajdeep Aher, Shivank Garg
24 Nov 2024

The effect of fine-tuning on language model toxicity
Will Hawkins, Brent Mittelstadt, Chris Russell
21 Oct 2024

Fine-grained Attention I/O Complexity: Comprehensive Analysis for Backward Passes
Xiaoyu Li, Yingyu Liang, Zhenmei Shi, Zhao-quan Song, Yufa Zhou
12 Oct 2024

Parameter-Efficient Fine-Tuning of State Space Models
Kevin Galim, Wonjun Kang, Yuchen Zeng, H. Koo, Kangwook Lee
11 Oct 2024

How Much Can RAG Help the Reasoning of LLM?
Jingyu Liu, Jiaen Lin, Yong Liu
03 Oct 2024

Differentially Private Kernel Density Estimation
Erzhi Liu, Jerry Yao-Chieh Hu, Alex Reneau, Zhao Song, Han Liu
03 Sep 2024

MoRe Fine-Tuning with 10x Fewer Parameters
Wenxuan Tan, Nicholas Roberts, Tzu-Heng Huang, Jitian Zhao, John Cooper, Samuel Guo, Chengyu Duan, Frederic Sala
30 Aug 2024

LBC: Language-Based-Classifier for Out-Of-Variable Generalization
Kangjun Noh, Baekryun Seong, Hoyoon Byun, Youngjun Choi, Sungjin Song, Kyungwoo Song
20 Aug 2024

Memorization Capacity for Additive Fine-Tuning with Small ReLU Networks
Jy-yong Sohn, Dohyun Kwon, Seoyeon An, Kangwook Lee
01 Aug 2024

Parameter-Efficient Fine-Tuning via Circular Convolution
Aochuan Chen, Jiashun Cheng, Zijing Liu, Ziqi Gao, Fugee Tsung, Yu Li, Jia Li
27 Jul 2024

Do Large Language Models Have Compositional Ability? An Investigation into Limitations and Scalability
Zhuoyan Xu, Zhenmei Shi, Yingyu Liang
22 Jul 2024

Enhancing Parameter Efficiency and Generalization in Large-Scale Models: A Regularized and Masked Low-Rank Adaptation Approach
Yuzhu Mao, Siqi Ping, Zihao Zhao, Yang Liu, Wenbo Ding
16 Jul 2024

A Survey on LoRA of Large Language Models
Yuren Mao, Yuhang Ge, Yijiang Fan, Wenyi Xu, Yu Mi, Zhonghao Hu, Yunjun Gao
08 Jul 2024

Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead
Rickard Brüel-Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin, Justin Solomon
17 Jun 2024

Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters
Yuhang Zhou, Zihua Zhao, Haolin Li, Siyuan Du, Jiangchao Yao, Ya Zhang, Yanfeng Wang
14 Jun 2024

Trans-LoRA: towards data-free Transferable Parameter Efficient Finetuning
Runqian Wang, Soumya Ghosh, David D. Cox, Diego Antognini, Aude Oliva, Rogerio Feris, Leonid Karlinsky
27 May 2024

Understanding Linear Probing then Fine-tuning Language Models from NTK Perspective
Akiyoshi Tomihari, Issei Sato
27 May 2024

LoRA Learns Less and Forgets Less
D. Biderman, Jose Javier Gonzalez Ortiz, Jacob P. Portes, Mansheej Paul, Philip Greengard, ..., Sam Havens, Vitaliy Chiley, Jonathan Frankle, Cody Blakeney, John P. Cunningham
15 May 2024

Efficiency in Focus: LayerNorm as a Catalyst for Fine-tuning Medical Visual Language Pre-trained Models
Jiawei Chen, Dingkang Yang, Yue Jiang, Mingcheng Li, Jinjie Wei, Xiaolu Hou, Lihua Zhang
25 Apr 2024

FL-TAC: Enhanced Fine-Tuning in Federated Learning via Low-Rank, Task-Specific Adapter Clustering
Siqi Ping, Yuzhu Mao, Yang Liu, Xiao-Ping Zhang, Wenbo Ding
23 Apr 2024

ResLoRA: Identity Residual Mapping in Low-Rank Adaption
Shuhua Shi, Shaohan Huang, Minghui Song, Zhoujun Li, Zihan Zhang, Haizhen Huang, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang
28 Feb 2024

Asymmetry in Low-Rank Adapters of Foundation Models
Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez de Ocáriz Borde, Rickard Brüel-Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, Justin Solomon
26 Feb 2024

LoRA+: Efficient Low Rank Adaptation of Large Models
Soufiane Hayou, Nikhil Ghosh, Bin Yu
19 Feb 2024

LoRA Training in the NTK Regime has No Spurious Local Minima
Uijeong Jang, Jason D. Lee, Ernest K. Ryu
19 Feb 2024

Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters
Umberto Cappellazzo, Daniele Falavigna, A. Brutti
01 Feb 2024

From RAG to QA-RAG: Integrating Generative AI for Pharmaceutical Regulatory Compliance Process
Jaewoong Kim, Moohong Min
26 Jan 2024

SkyEyeGPT: Unifying Remote Sensing Vision-Language Tasks via Instruction Tuning with Large Language Model
Yangfan Zhan, Zhitong Xiong, Yuan Yuan
18 Jan 2024

Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment
Lingling Xu, Haoran Xie, S. J. Qin, Xiaohui Tao, F. Wang
19 Dec 2023

RefinedFields: Radiance Fields Refinement for Unconstrained Scenes
Karim Kassab, Antoine Schnepf, Jean-Yves Franceschi, Laurent Caraffa, Jeremie Mary, Valérie Gouet-Brunet
01 Dec 2023

The Learnability of In-Context Learning
Noam Wies, Yoav Levine, Amnon Shashua
14 Mar 2023

A Kernel-Based View of Language Model Fine-Tuning
Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, Sanjeev Arora
11 Oct 2022

Model Reprogramming: Resource-Efficient Cross-Domain Machine Learning
Pin-Yu Chen
22 Feb 2022

Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
28 Jan 2022

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
18 Apr 2021

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018

Benefits of depth in neural networks
Matus Telgarsky
14 Feb 2016