
MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning
23 October 2024
Authors: Jingfan Zhang, Yi Zhao, Dan Chen, Xing Tian, Huanran Zheng, Wei Zhu
Topic: MoE
Links: ArXiv · PDF · HTML

Papers citing "MiLoRA: Efficient Mixture of Low-Rank Adaptation for Large Language Models Fine-tuning"

1 / 1 papers shown

Compositional Subspace Representation Fine-tuning for Adaptive Large Language Models
Andy Zhou · MoMe · 13 Mar 2025