arXiv: 2406.12034
Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts
17 June 2024
Junmo Kang, Leonid Karlinsky, Hongyin Luo, Zhen Wang, Jacob A. Hansen, James Glass, David D. Cox, Rameswar Panda, Rogerio Feris, Alan Ritter
Tags: MoMe, MoE

Papers citing "Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts" (8 of 8 papers shown):

MergeME: Model Merging Techniques for Homogeneous and Heterogeneous MoEs
Yuhang Zhou, Giannis Karamanolakis, Victor Soto, Anna Rumshisky, Mayank Kulkarni, Furong Huang, Wei Ai, Jianhua Lu
Tags: MoMe
03 Feb 2025

Supervision-free Vision-Language Alignment
Giorgio Giannone, Ruoteng Li, Qianli Feng, Evgeny Perevodchikov, Rui Chen, Aleix M. Martinez
Tags: VLM
08 Jan 2025

Upcycling Instruction Tuning from Dense to Mixture-of-Experts via Parameter Merging
Tingfeng Hui, Zhenyu Zhang, Shuohuan Wang, Yu Sun, Hua Wu, Sen Su
Tags: MoE
02 Oct 2024

Flexible and Effective Mixing of Large Language Models into a Mixture of Domain Experts
Rhui Dih Lee, L. Wynter, R. Ganti
Tags: MoE
30 Aug 2024

Leveraging Open Knowledge for Advancing Task Expertise in Large Language Models
Yuncheng Yang, Yulei Qin, Tong Wu, Zihan Xu, Gang Li, ..., Yuchen Shi, Ke Li, Xing Sun, Jie Yang, Yun Gu
Tags: ALM, OffRL, MoE
28 Aug 2024

Empower Nested Boolean Logic via Self-Supervised Curriculum Learning
Hongqiu Wu, Linfeng Liu, Haizhen Zhao, Min Zhang
Tags: LRM, AI4CE, NAI, ELM
09 Oct 2023

Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models
Junmo Kang, Wei Xu, Alan Ritter
02 May 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM
04 Mar 2022