Grove MoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts
11 August 2025
Haoyuan Wu, Haoxing Chen, Xiaodong Chen, Zhanchao Zhou, Tieyuan Chen, Yihong Zhuang, Guoshan Lu, Zenan Huang, Junbo Zhao, Lin Liu, Zhenzhong Lan, Bei Yu, Jianguo Li
Tags: MoE
Links: ArXiv (abs) · PDF · HTML · HuggingFace (22 upvotes) · GitHub (22★)
Papers citing "Grove MoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts" (4 papers):

Every Token Counts: Generalizing 16M Ultra-Long Context in Large Language Models
X. S. Hu, Zhanchao Zhou, Ruiqi Liang, Zehuan Li, Wei Wu, Jianguo Li
28 November 2025

Route Experts by Sequence, not by Token
Tiansheng Wen, Y. Wang, Aosong Feng, Long Ma, Xinyang Liu, Y. Wang, Lixuan Guo, Bo Chen, Stefanie Jegelka, Chenyu You
Tags: MoE
09 November 2025

Recycling Pretrained Checkpoints: Orthogonal Growth of Mixture-of-Experts for Efficient Large Language Model Pre-Training
Ruizhe Wang, Yucheng Ding, Xiao Liu, Yaoxiang Wang, Peng Cheng, Baining Guo, Zhengjun Zha, Yeyun Gong
09 October 2025

Diversity Is All You Need for Contrastive Learning: Spectral Bounds on Gradient Magnitudes
Peter Ochieng
07 October 2025