OMoE: Diversifying Mixture of Low-Rank Adaptation by Orthogonal Finetuning

17 January 2025
Jinyuan Feng, Zhiqiang Pu, Tianyi Hu, Dongmin Li, Xiaolin Ai, Huimu Wang
MoE
arXiv: 2501.10062

Papers citing "OMoE: Diversifying Mixture of Low-Rank Adaptation by Orthogonal Finetuning"

8 of 8 papers shown

FreeFuse: Multi-Subject LoRA Fusion via Auto Masking at Test Time
Yaoli Liu, Yao-Xiang Ding, Kun Zhou
27 Oct 2025

OrthAlign: Orthogonal Subspace Decomposition for Non-Interfering Multi-Objective Alignment
Guanbin Li, Zhihao Xu, Junhao Dong, Jian Zhao, Yuchen Yuan, ..., Zhengtao Yao, Huahui Yi, Dongrui Liu, Xinfeng Li, Kun Wang
29 Sep 2025

Orthogonal Finetuning Made Scalable
Zeju Qiu, Weiyang Liu, Adrian Weller, Bernhard Schölkopf
24 Jun 2025

MoORE: SVD-based Model MoE-ization for Conflict- and Oblivion-Resistant Multi-Task Adaptation
Shen Yuan, Yin Zheng, Taifeng Wang, Binbin Liu, Hongteng Xu
MoMe
17 Jun 2025

Two Is Better Than One: Rotations Scale LoRAs
Hongcan Guo, Guoshun Nan, Yuan Yang, Diyang Zhang, Haotian Li, ..., Yuhan Ran, Xinye Cao, Sicong Leng, Xiaofeng Tao, Xudong Jiang
29 May 2025

CoMoE: Contrastive Representation for Mixture-of-Experts in Parameter-Efficient Fine-tuning
Jinyuan Feng, Chaopeng Wei, Tenghai Qiu, Tianyi Hu, Zhiqiang Pu
MoE
23 May 2025

FT-MoE: Sustainable-learning Mixture of Experts for Fault-Tolerant Computing
Wenjing Xiao, Wenhao Song, Miaojiang Chen, Ruikun Luo
MoE
29 Apr 2025

Mixture of Group Experts for Learning Invariant Representations
Lei Kang, Jia Li, Mi Tian, Hua Huang
MoE
12 Apr 2025