Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts

17 June 2024
Junmo Kang, Leonid Karlinsky, Hongyin Luo, Zhen Wang, Jacob A. Hansen, James Glass, David D. Cox, Rameswar Panda, Rogerio Feris, Alan Ritter
Tags: MoMe, MoE

Papers citing "Self-MoE: Towards Compositional Large Language Models with Self-Specialized Experts"

3 / 3 papers shown
Empower Nested Boolean Logic via Self-Supervised Curriculum Learning (09 Oct 2023)
Hongqiu Wu, Linfeng Liu, Haizhen Zhao, Min Zhang
Tags: LRM, AI4CE, NAI, ELM
Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models (02 May 2023)
Junmo Kang, Wei-ping Xu, Alan Ritter
Training language models to follow instructions with human feedback (04 Mar 2022)
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM