AutoLoRA: Automatically Tuning Matrix Ranks in Low-Rank Adaptation Based on Meta Learning

North American Chapter of the Association for Computational Linguistics (NAACL), 2024
14 March 2024
Ruiyi Zhang, Rushi Qiang, Sai Ashish Somayajula, Pengtao Xie

Papers citing "AutoLoRA: Automatically Tuning Matrix Ranks in Low-Rank Adaptation Based on Meta Learning"

8 citing papers

LoRMA: Low-Rank Multiplicative Adaptation for LLMs
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Harsh Bihany, Shubham Patel, Ashutosh Modi
09 Jun 2025

The Future of Continual Learning in the Era of Foundation Models: Three Key Directions
Jack Bell, Luigi Quarantiello, Eric Nuertey Coleman, Lanpei Li, Malio Li, Mauro Madeddu, Elia Piccoli, Vincenzo Lomonaco
03 Jun 2025

Improved Representation Steering for Language Models
Zhengxuan Wu, Qinan Yu, Aryaman Arora, Christopher D. Manning, Christopher Potts
27 May 2025

DeLoRA: Decoupling Angles and Strength in Low-rank Adaptation
International Conference on Learning Representations (ICLR), 2025
Massimo Bini, Leander Girrbach, Zeynep Akata
23 Mar 2025

HaLoRA: Hardware-aware Low-Rank Adaptation for Large Language Models Based on Hybrid Compute-in-Memory Architecture
Taiqiang Wu, Chenchen Ding, Wenyong Zhou, Yuxin Cheng, Xincheng Feng, Shuqi Wang, Chufan Shi, Ziyue Liu, Ngai Wong
27 Feb 2025

RandLoRA: Full-rank parameter-efficient fine-tuning of large models
International Conference on Learning Representations (ICLR), 2025
Paul Albert, Frederic Z. Zhang, Hemanth Saratchandran, Cristian Rodriguez-Opazo, Anton van den Hengel, Ehsan Abbasnejad
03 Feb 2025

BiDoRA: Bi-level Optimization-Based Weight-Decomposed Low-Rank Adaptation
Peijia Qin, Ruiyi Zhang, Pengtao Xie
13 Oct 2024

Structure-Preserving Network Compression Via Low-Rank Induced Training Through Linear Layers Composition
Xitong Zhang, Ismail Alkhouri, Rongrong Wang
06 May 2024