MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
23 October 2024
Jingfan Zhang, Yi Zhao, Dan Chen, Xing Tian, Huanran Zheng, Wei Zhu
Topics: MoE
ArXiv (abs) · PDF · HTML · HuggingFace (2 upvotes)
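For readers skimming this list, a minimal sketch of the mixture-of-low-rank-adaptation idea named in the title may help: several LoRA expert pairs sit on top of a frozen linear layer, and a lightweight router mixes their outputs. The sketch below assumes a generic MoE-over-LoRA formulation with sequence-level gating (MiLoRA's efficiency argument rests on reusing routing results across tokens rather than re-routing per token and per layer); all class and method names here are illustrative, not the authors' code.

```python
# Illustrative sketch of a mixture-of-LoRA layer in the spirit of MiLoRA
# (Zhang et al., EMNLP 2024). This is an assumption-based sketch, not the
# paper's implementation; names like MixtureOfLoRALinear are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, num_experts=4, rank=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        # One low-rank (A, B) pair per expert: delta_e = B_e @ A_e
        self.A = nn.Parameter(torch.randn(num_experts, rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, out_features, rank))
        self.router = nn.Linear(in_features, num_experts)

    def route(self, prompt_hidden):
        # Sequence-level gating (assumed design): one softmax over experts
        # per sequence, computed from the mean prompt representation and
        # then reused for every token, avoiding per-token router overhead.
        return F.softmax(self.router(prompt_hidden.mean(dim=1)), dim=-1)

    def forward(self, x, gates):
        # x: (batch, seq, in_features); gates: (batch, num_experts)
        out = self.base(x)
        down = torch.einsum("bsi,eri->bser", x, self.A)    # per-expert down-proj
        up = torch.einsum("bser,eor->bseo", down, self.B)  # back to out_features
        return out + torch.einsum("bseo,be->bso", up, gates)
```

In use, `gates = layer.route(prompt_hidden)` would be computed once from the prompt and passed to every subsequent `forward` call, which is where the per-token savings over a standard token-level MoE router would come from.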

Papers citing "MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning"

13 papers shown
FT-MDT: Extracting Decision Trees from Medical Texts via a Novel Low-rank Adaptation Method
Yuheng Li, Jiechao Gao, Wei Han, Wenwen Ouyang, Wei Zhu, Hui Yi Leong
Topics: OffRL, AI4CE
06 Oct 2025

HiCoLoRA: Addressing Context-Prompt Misalignment via Hierarchical Collaborative LoRA for Zero-Shot DST
Shuyu Zhang, Yifan Wei, X. Wang, Yanmin Zhu, Yangfan He, Yixuan Weng, Bin Li
Topics: VLM
24 Sep 2025

StereoAdapter: Adapting Stereo Depth Estimation to Underwater Scenes
Zhengri Wu, Yiran Wang, Yu Wen, Zeyu Zhang, Biao Wu, Hao Tang
Topics: MDE
19 Sep 2025

Bi-LoRA: Efficient Sharpness-Aware Minimization for Fine-Tuning Large-Scale Models
Yuhang Liu, Tao Li, Zhehao Huang, Zuopeng Yang, Xiaolin Huang
27 Aug 2025

Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey and Benchmark
Yi Xin, Jianjiang Yang, Haodi Zhou, Junlong Du, Qi Qin, ..., Bin Fu, Xiaokang Yang, Guangtao Zhai, Ming-Hsuan Yang, Xiaohong Liu
Topics: VLM
01 Jul 2025

Little By Little: Continual Learning via Self-Activated Sparse Mixture-of-Rank Adaptive Learning
Haodong Lu, Chongyang Zhao, Jason Xue, Lina Yao, Kristen Moore, Dong Gong
Topics: CLL, MoMe, MoE
26 Jun 2025

μ-MoE: Test-Time Pruning as Micro-Grained Mixture-of-Experts
T. Koike-Akino, Jing Liu, Ye Wang
Topics: MoE
24 May 2025

CoMoE: Contrastive Representation for Mixture-of-Experts in Parameter-Efficient Fine-tuning
Jinyuan Feng, Chaopeng Wei, Tenghai Qiu, Tianyi Hu, Zhiqiang Pu
Topics: MoE
23 May 2025

Distillation-Supervised Convolutional Low-Rank Adaptation for Efficient Image Super-Resolution
Xinning Chai, Yao Zhang, Yuxuan Zhang, Zhengxue Cheng, Yingsheng Qin, Yucai Yang, Li Song
15 Apr 2025

Compositional Subspace Representation Fine-tuning for Adaptive Large Language Models
Andy Zhou
Topics: MoMe
13 Mar 2025

K-LoRA: Unlocking Training-Free Fusion of Any Subject and Style LoRAs
Computer Vision and Pattern Recognition (CVPR), 2025
Ziheng Ouyang, Zhen Li, Qibin Hou
Topics: MoMe, OffRL
25 Feb 2025

Adaptive Rank, Reduced Forgetting: Knowledge Retention in Continual Learning Vision-Language Models with Dynamic Rank-Selective LoRA
Haodong Lu, Chongyang Zhao, Jason Xue, Lina Yao, Kristen Moore, Dong Gong
Topics: CLL, KELM, VLM
01 Dec 2024

LoRTA: Low Rank Tensor Adaptation of Large Language Models
Ignacio Hounie, Charilaos I. Kanatsoulis, Arnuv Tandon, Alejandro Ribeiro
05 Oct 2024