Heterogeneous Multi-task Learning with Expert Diversity
Raquel Y. S. Aoki, Frederick Tung, Gabriel L. Oliveira
20 June 2021 · arXiv:2106.10595 · (MoE)
Cited By: 15 papers citing "Heterogeneous Multi-task Learning with Expert Diversity"
- Mixture of Group Experts for Learning Invariant Representations. Lei Kang, Jia Li, Mi Tian, Hua Huang. 12 Apr 2025. (MoE)
- The Transition from Centralized Machine Learning to Federated Learning for Mental Health in Education: A Survey of Current Methods and Future Directions. Maryam Ebrahimi, Rajeev Sahay, Seyyedali Hosseinalipour, Bita Akram. 20 Jan 2025.
- Context-Aware Token Selection and Packing for Enhanced Vision Transformer. Tianyi Zhang, B. Li, Jae-sun Seo, Yu Cao. 31 Oct 2024.
- Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts. Sukwon Yun, Inyoung Choi, Jie Peng, Yangfan Wu, J. Bao, Qiyiwen Zhang, Jiayi Xin, Qi Long, Tianlong Chen. 10 Oct 2024. (MoE)
- FlexCare: Leveraging Cross-Task Synergy for Flexible Multimodal Healthcare Prediction. Muhao Xu, Zhenfeng Zhu, Youru Li, Shuai Zheng, Yawei Zhao, Kunlun He, Yao Zhao. 17 Jun 2024.
- Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters. Yuhang Zhou, Zihua Zhao, Haolin Li, Siyuan Du, Jiangchao Yao, Ya Zhang, Yanfeng Wang. 14 Jun 2024. (MoMe, MoE)
- Bridging Remote Sensors with Multisensor Geospatial Foundation Models. Boran Han, Shuai Zhang, Xingjian Shi, Markus Reichstein. 01 Apr 2024.
- Multimodal Clinical Trial Outcome Prediction with Large Language Models. Wenhao Zheng, Dongsheng Peng, Hongxia Xu, Yun-Qing Li, Hongtu Zhu, Tianfan Fu, Huaxiu Yao. 09 Feb 2024.
- Adaptive Gating in Mixture-of-Experts based Language Models. Jiamin Li, Qiang Su, Yitao Yang, Yimin Jiang, Cong Wang, Hong-Yu Xu. 11 Oct 2023. (MoE)
- DynaShare: Task and Instance Conditioned Parameter Sharing for Multi-Task Learning. E. Rahimian, Golara Javadi, Frederick Tung, Gabriel L. Oliveira. 26 May 2023. (MoE)
- Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers. Tianlong Chen, Zhenyu (Allen) Zhang, Ajay Jaiswal, Shiwei Liu, Zhangyang Wang. 02 Mar 2023. (MoE)
- Exploiting Graph Structured Cross-Domain Representation for Multi-Domain Recommendation. Alejandro Ariza-Casabona, Bartlomiej Twardowski, Tri Kurniawan Wijaya. 12 Feb 2023.
- M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design. Hanxue Liang, Zhiwen Fan, Rishov Sarkar, Ziyu Jiang, Tianlong Chen, Kai Zou, Yu Cheng, Cong Hao, Zhangyang Wang. 26 Oct 2022. (MoE)
- Multi-treatment Effect Estimation from Biomedical Data. Raquel Y. S. Aoki, Yizhou Chen, M. Ester. 14 Dec 2021.
- Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. Chelsea Finn, Pieter Abbeel, Sergey Levine. 09 Mar 2017. (OOD)