MultiLoRA: Democratizing LoRA for Better Multi-Task Learning
arXiv:2311.11501 · 20 November 2023
Yiming Wang, Yu Lin, Xiaodong Zeng, Guannan Zhang
Papers citing "MultiLoRA: Democratizing LoRA for Better Multi-Task Learning" (4 / 4 papers shown):
1. Memory-Efficient LLM Training by Various-Grained Low-Rank Projection of Gradients (03 May 2025)
   Yezhen Wang, Zhouhao Yang, Brian K Chen, Fanyi Pu, Bo-wen Li, Tianyu Gao, Kenji Kawaguchi
2. Make LoRA Great Again: Boosting LoRA with Adaptive Singular Values and Mixture-of-Experts Optimization Alignment (24 Feb 2025)
   Chenghao Fan, Zhenyi Lu, Sichen Liu, Xiaoye Qu, Wei Wei, Chengfeng Gu, Yu-Xi Cheng
3. GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (06 Mar 2024)
   Jiawei Zhao, Zhenyu (Allen) Zhang, Beidi Chen, Zhangyang Wang, A. Anandkumar, Yuandong Tian
4. P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks (14 Oct 2021)
   Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Lam Tam, Zhengxiao Du, Zhilin Yang, Jie Tang