

Mixture of Experts (MoE)

Mixture of Experts (MoE) is a machine learning technique that combines multiple expert models to make predictions. Each expert specializes in a different aspect of the data, and a learned gating network decides which experts to activate for a given input. Because only a small subset of experts runs per input, MoE models can grow their total parameter count without a proportional increase in per-input compute, improving both capacity and efficiency.
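
To make the gating mechanism described above concrete, here is a minimal sketch of a sparse MoE layer with top-k routing, assuming PyTorch. The layer sizes, expert count, and top_k value are illustrative assumptions, not taken from any paper on this page.

```python
# A minimal sketch of a sparse Mixture-of-Experts layer with top-k gating,
# assuming PyTorch. Sizes, expert count, and top_k are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each expert is an independent feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )
        # The gating network scores every expert for each token.
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten to (tokens, d_model) for routing.
        tokens = x.reshape(-1, x.shape[-1])
        logits = self.gate(tokens)                          # (tokens, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)  # keep only the k best experts per token
        weights = F.softmax(weights, dim=-1)                # renormalize the kept scores

        out = torch.zeros_like(tokens)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(1) * expert(tokens[mask])
        return out.reshape(x.shape)


if __name__ == "__main__":
    layer = MoELayer(d_model=64, d_hidden=256)
    y = layer(torch.randn(2, 10, 64))
    print(y.shape)  # torch.Size([2, 10, 64])
```

Production systems typically replace the Python routing loop with batched gather/scatter kernels and add an auxiliary load-balancing loss so tokens spread evenly across experts; many of the routing and inference papers listed below study exactly these trade-offs.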


All papers (50 of 1,507 shown)

GNN-MoE: Context-Aware Patch Routing using GNNs for Parameter-Efficient Domain Generalization
Mahmoud Soliman, Omar Abdelaziz, Ahmed Radwan, Anand, Mohamed Shehata
MoE · 56 · 0 · 0 · 06 Nov 2025

Opportunistic Expert Activation: Batch-Aware Expert Routing for Faster Decode Without Retraining
Costin-Andrei Oncescu, Qingyang Wu, Wai Tong Chung, Robert Wu, Bryan Gopal, Junxiong Wang, Tri Dao, Ben Athiwaratkun
MoE · 24 · 0 · 0 · 04 Nov 2025

DEER: Disentangled Mixture of Experts with Instance-Adaptive Routing for Generalizable Machine-Generated Text Detection
Guoxin Ma, Xiaoming Liu, Zhanhan Zhang, Chengzhengxu Li, Shengchao Liu, Y. Lan
MoE · 12 · 0 · 0 · 03 Nov 2025

Random Initialization of Gated Sparse Adapters
Vi Retault, Yohaï-Eliel Berreby
CLL, MoE · 20 · 0 · 0 · 03 Nov 2025

CryptoMoE: Privacy-Preserving and Scalable Mixture of Experts Inference via Balanced Expert Routing
Yifan Zhou, Tianshi Xu, Jue Hong, Ye Wu, Meng Li
MoE · 96 · 0 · 0 · 03 Nov 2025

ExpertFlow: Adaptive Expert Scheduling and Memory Coordination for Efficient MoE Inference
Zixu Shen, Kexin Chu, Y. Zhang, Dawei Xiang, R. Wu, Wei Zhang
MoE · 36 · 0 · 0 · 30 Oct 2025

MoME: Mixture of Visual Language Medical Experts for Medical Imaging Segmentation
Arghavan Rezvani, Xiangyi Yan, Anthony T. Wu, Kun Han, Pooya Khosravi, Xiaohui Xie
MoE · 36 · 0 · 0 · 30 Oct 2025

MossNet: Mixture of State-Space Experts is a Multi-Head Attention
Shikhar Tuli, James Smith, Haris Jeelani, Chi-Heng Lin, Abhishek Patel, Vasili Ramanishka, Yen-Chang Hsu, Hongxia Jin
MoE · 83 · 0 · 0 · 30 Oct 2025

Mixture-of-Transformers Learn Faster: A Theoretical Study on Classification Problems
Hongbo Li, Qinhang Wu, Sen-Fon Lin, Yingbin Liang, Ness B. Shroff
MoE · 12 · 0 · 0 · 30 Oct 2025

Mixture-of-Experts Operator Transformer for Large-Scale PDE Pre-Training
Hong Wang, Haiyang Xin, Jie Wang, Xuanze Yang, Fei Zha, Huanshuo Dong, Yan Jiang
MoE, AI4CE · 207 · 0 · 0 · 29 Oct 2025

Ming-Flash-Omni: A Sparse, Unified Architecture for Multimodal Perception and Generation
Inclusion AI, Bowen Ma, Cheng Zou, C. Yan, Chunxiang Jin, ..., Zhiqiang Fang, Zhihao Qiu, Ziyuan Huang, Zizheng Yang, Z. He
MLLM, MoE, VLM · 113 · 0 · 0 · 28 Oct 2025

Routing Matters in MoE: Scaling Diffusion Transformers with Explicit Routing Guidance
Y. X. Wei, Shiwei Zhang, Hangjie Yuan, Yujin Han, Zhekai Chen, ..., Difan Zou, Xihui Liu, Yingya Zhang, Yu Liu, Hongming Shan
DiffM, MoE · 32 · 0 · 0 · 28 Oct 2025

Towards Stable and Effective Reinforcement Learning for Mixture-of-Experts
Di Zhang, Xun Wu, Shaohan Huang, Y. Hao, Li Dong, Zewen Chi, Lei Sha, Furu Wei
MoE · 64 · 0 · 0 · 27 Oct 2025

EMTSF: Extraordinary Mixture of SOTA Models for Time Series Forecasting
Musleh Alharthi, Kaleel Mahmood, Sarosh Patel, A. Mahmood
AI4TS, MoE · 121 · 0 · 0 · 27 Oct 2025

Sparsity and Superposition in Mixture of Experts
Marmik Chaudhari, Jeremi Nuer, Rome Thorstenson
MoE · 60 · 0 · 0 · 26 Oct 2025

Every Activation Boosted: Scaling General Reasoner to 1 Trillion Open Language Foundation
Ling Team, Ang Li, B. Liu, Binbin Hu, Bing Li, ..., Siyuan Li, Song Liu, Ting Guo, Tong Zhao, Wanli Gu
MoE, ReLM, ALM, LRM, AI4CE, ELM · 68 · 1 · 0 · 25 Oct 2025

Metis-HOME: Hybrid Optimized Mixture-of-Experts for Multimodal Reasoning
Xiaohan Lan, Fanfan Liu, Haibo Qiu, Siqi Yang, Delian Ruan, Peng Shi, Lin Ma
MoE, LRM · 32 · 0 · 0 · 23 Oct 2025

A Parameter-Efficient Mixture-of-Experts Framework for Cross-Modal Geo-Localization
Linfeng Li, Jian-jun Zhao, Zepeng Yang, Yuhang Song, Bojun Lin, Tianle Zhang, Yuchen Yuan, C. Zhang, Xuelong Li
MoE · 72 · 0 · 0 · 23 Oct 2025

MoE-Prism: Disentangling Monolithic Experts for Elastic MoE Services via Model-System Co-Designs
Xinfeng Xia, Jiacheng Liu, Xiaofeng Hou, Peng Tang, Mingxuan Zhang, Wenfeng Wang, Chao Li
MoE · 44 · 0 · 0 · 22 Oct 2025

HybridEP: Scaling Expert Parallelism to Cross-Datacenter Scenario via Hybrid Expert/Data Transmission
Weihao Yang, Hao Huang, Donglei Wu, Ningke Li, Yanqi Pan, Qiyang Zheng, Wen Xia, Shiyi Li, Qiang Wang
MoE · 48 · 0 · 0 · 22 Oct 2025

MoE-GS: Mixture of Experts for Dynamic Gaussian Splatting
In-Hwan Jin, Hyeongju Mun, Joonsoo Kim, Kugjin Yun, Kyeongbo Kong
3DGS, MoE · 105 · 0 · 0 · 22 Oct 2025

ToMMeR -- Efficient Entity Mention Detection from Large Language Models
Victor Morand, Nadi Tomeh, Josiane Mothe, Benjamin Piwowarski
MoE, VLM · 78 · 0 · 0 · 22 Oct 2025

Noise-Conditioned Mixture-of-Experts Framework for Robust Speaker Verification
Bin Gu, Lipeng Dai, Huipeng Du, Haitao Zhao, Jibo Wei
AAML, MoE · 53 · 0 · 0 · 21 Oct 2025

ReXMoE: Reusing Experts with Minimal Overhead in Mixture-of-Experts
Zheyue Tan, Ruoyao Xiao, Tao Yuan, Dong Zhou, Weilin Liu, ..., Haiyang Xu, Boxun Li, Guohao Dai, Bo Zhao, Yu Wang
MoE · 52 · 0 · 0 · 20 Oct 2025

Intelligent Communication Mixture-of-Experts Boosted-Medical Image Segmentation Foundation Model
Xinwei Zhang, Hu Chen, Zhe Yuan, Sukun Tian, Peng Feng
MoE · 39 · 0 · 0 · 20 Oct 2025

Learned Inertial Odometry for Cycling Based on Mixture of Experts Algorithm
Hao Qiao, Yan Wang, Shuo Yang, Xiaoyao Yu, Jian Kuang, X. Niu
MoE · 46 · 0 · 0 · 20 Oct 2025

Leave It to the Experts: Detecting Knowledge Distillation via MoE Expert Signatures
Pingzhi Li, Morris Yu-Chao Huang, Zhen Tan, Qingquan Song, Jie Peng, Kai Zou, Yu Cheng, Kaidi Xu, Tianlong Chen
MoE, AAML · 69 · 0 · 0 · 19 Oct 2025

L-MoE: End-to-End Training of a Lightweight Mixture of Low-Rank Adaptation Experts
Shihao Ji, Zihui Song
MoE · 74 · 0 · 0 · 19 Oct 2025

Input Domain Aware MoE: Decoupling Routing Decisions from Task Optimization in Mixture of Experts
Yongxiang Hua, H. Cao, Zhou Tao, Bocheng Li, Zihao Wu, Chaohu Liu, Linli Xu
MoE · 56 · 0 · 0 · 18 Oct 2025

Modeling Expert Interactions in Sparse Mixture of Experts via Graph Structures
Minh Khoi Nguyen Nhat, R. Teo, Laziz U. Abdullaev, Maurice Mok, Viet-Hoang Tran, T. Nguyen
MoE · 54 · 0 · 0 · 18 Oct 2025

Mixture of Experts Approaches in Dense Retrieval Tasks
Effrosyni Sokli, Pranav Kasela, Georgios Peikos, G. Pasi
MoE · 56 · 0 · 0 · 17 Oct 2025

MTmixAtt: Integrating Mixture-of-Experts with Multi-Mix Attention for Large-Scale Recommendation
Xianyang Qi, Hao Guo, Zhaoyu Hu, Zhirui Kuai, Chang Liu, Hongxiang Lin, Lei Wang
OffRL, MoE · 80 · 0 · 0 · 17 Oct 2025

MACE: Mixture-of-Experts Accelerated Coordinate Encoding for Large-Scale Scene Localization and Rendering
Mingkai Liu, Dikai Fan, Haohua Que, Haojia Gao, Xiao Liu, ..., Ruicong Ye, Wanli Qiu, Handong Yao, Ruopeng Zhang, X. Y. Huang
MoE · 16 · 0 · 0 · 16 Oct 2025

Rewiring Experts on the Fly: Continuous Rerouting for Better Online Adaptation in Mixture-of-Expert models
Guinan Su, Yanwu Yang, Li Shen, Lu Yin, Shiwei Liu, Jonas Geiping
MoE, KELM · 68 · 1 · 0 · 16 Oct 2025

Expertise need not monopolize: Action-Specialized Mixture of Experts for Vision-Language-Action Learning
Weijie Shen, Y. Liu, Yuhao Wu, Zhixuan Liang, Sijia Gu, ..., Yusen Qin, Jiangmiao Pang, Xinping Guan, Xiaokang Yang, Yao Mu
MoE · 36 · 1 · 0 · 16 Oct 2025

UniMoE-Audio: Unified Speech and Music Generation with Dynamic-Capacity MoE
Zhenyu Liu, Yunxin Li, Xuanyu Zhang, Qixun Teng, Shenyuan Jiang, ..., Mingjun Zhao, Yu-Syuan Xu, Yancheng He, Baotian Hu, Min Zhang
AuLLM, MoE · 106 · 0 · 0 · 15 Oct 2025

Toward Efficient Inference Attacks: Shadow Model Sharing via Mixture-of-Experts
Li Bai, Qingqing Ye, Xinwei Zhang, Sen Zhang, Zi Liang, Jianliang Xu, Haibo Hu
FedML, MIACV, MoE · 85 · 0 · 0 · 15 Oct 2025

Sparse Subnetwork Enhancement for Underrepresented Languages in Large Language Models
Daniil Gurgurov, Josef van Genabith, Simon Ostermann
MoE · 68 · 0 · 0 · 15 Oct 2025

VaultGemma: A Differentially Private Gemma Model
Amer Sinha, Thomas Mesnard, Ryan McKenna, Daogao Liu, Christopher A. Choquette-Choo, ..., Borja De Balle Pigem, Prem Eruvbetine, T. Warkentin, Armand Joulin, Ravi Kumar
FedML, MoE, VLM, MDE · 142 · 1 · 0 · 15 Oct 2025

Steer-MoE: Efficient Audio-Language Alignment with a Mixture-of-Experts Steering Module
Ruitao Feng, Bixi Zhang, Sheng Liang, Zheng Yuan
AuLLM, MoE, LLMSV · 67 · 0 · 0 · 15 Oct 2025

Who Speaks for the Trigger? Dynamic Expert Routing in Backdoored Mixture-of-Experts Transformers
Xin Zhao, Xiaojun Chen, Bingshan Liu, Haoyu Gao, Zhendong Zhao, Yilong Chen
MoE, AAML · 64 · 0 · 0 · 15 Oct 2025

GatePro: Parameter-Free Expert Selection Optimization for Mixture-of-Experts Models
Chen Zheng, Y. Cai, Deyi Liu, Jin Ma, Yiyuan Ma, Y. Yang, Jing Liu, Yutao Zeng, Xun Zhou, Siyuan Qiao
MoE · 60 · 0 · 0 · 15 Oct 2025

MoBiLE: Efficient Mixture-of-Experts Inference on Consumer GPU with Mixture of Big Little Experts
Yushu Zhao, Yubin Qin, Yang Wang, Xiaolong Yang, Huiming Han, Shaojun Wei, Yang Hu, Shouyi Yin
MoE · 47 · 0 · 0 · 14 Oct 2025

Scope: Selective Cross-modal Orchestration of Visual Perception Experts
Tianyu Zhang, Suyuchen Wang, Chao Wang, Juan A. Rodriguez, Ahmed Masry, Xiangru Jian, Yoshua Bengio, Perouz Taslakian
MoE · 90 · 0 · 0 · 14 Oct 2025

Stabilizing MoE Reinforcement Learning by Aligning Training and Inference Routers
Wenhan Ma, Hailin Zhang, Liang Zhao, Yifan Song, Yudong Wang, Zhifang Sui, Fuli Luo
MoE · 37 · 1 · 0 · 13 Oct 2025

MeTA-LoRA: Data-Efficient Multi-Task Fine-Tuning for Large Language Models
Bo Cheng, Xu Wang, Jinda Liu, Yi-Ju Chang, Yuan Wu
MoE, ALM · 60 · 0 · 0 · 13 Oct 2025

DND: Boosting Large Language Models with Dynamic Nested Depth
Tieyuan Chen, Xiaodong Chen, Haoxing Chen, Zhenzhong Lan, W. Lin, Jianguo Li
MoE · 64 · 0 · 0 · 13 Oct 2025

MC#: Mixture Compressor for Mixture-of-Experts Large Models
Wei Huang, Yue Liao, Yukang Chen, Jianhui Liu, Haoru Tan, Si Liu, Shiming Zhang, Shuicheng Yan, Xiaojuan Qi
MoE, MQ · 76 · 0 · 0 · 13 Oct 2025

Hierarchical LoRA MoE for Efficient CTR Model Scaling
Zhichen Zeng, Mengyue Hang, Xiaolong Liu, Xiaoyi Liu, Xiao Lin, ..., Chaofei Yang, Yiqun Liu, Hang Yin, Jiyan Yang, Hanghang Tong
MoE · 36 · 0 · 0 · 12 Oct 2025

Informed Routing in LLMs: Smarter Token-Level Computation for Faster Inference
Chao Han, Yijuan Liang, Zihao Xuan, Daokuan Wu, Wei Zhang, Xiaoyu Shen
MoE · 19 · 0 · 0 · 10 Oct 2025
[Chart: # Papers per month tagged "MoE"]