Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models
arXiv:2404.10237 · 16 April 2024
Songtao Jiang, Tuo Zheng, Yan Zhang, Yeying Jin, Li Yuan, Zuozhu Liu
Tags: MoE

Papers citing "Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models" (10 of 10 shown)

MicarVLMoE: A Modern Gated Cross-Aligned Vision-Language Mixture of Experts Model for Medical Image Captioning and Report Generation
Amaan Izhar, Nurul Japar, Norisma Idris, Ting Dang
Tags: MoE · 29 Apr 2025

Multimodal Large Language Models for Medicine: A Comprehensive Survey
Jiarui Ye, Hao Tang
Tags: LM&MA · 29 Apr 2025

Vision-Language Models for Edge Networks: A Comprehensive Survey
Ahmed Sharshar, Latif U. Khan, Waseem Ullah, Mohsen Guizani
Tags: VLM · 11 Feb 2025

When Do We Not Need Larger Vision Models?
Baifeng Shi, Ziyang Wu, Maolin Mao, Xin Wang, Trevor Darrell
Tags: VLM, LRM · 19 Mar 2024

MoAI: Mixture of All Intelligence for Large Language and Vision Models
Byung-Kwan Lee, Beomchan Park, Chae Won Kim, Yong Man Ro
Tags: MLLM, VLM · 12 Mar 2024

Towards an empirical understanding of MoE design choices
Dongyang Fan, Bettina Messmer, Martin Jaggi
20 Feb 2024

SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models
Chris Liu, Renrui Zhang, Longtian Qiu, Siyuan Huang, Weifeng Lin, ..., Hao Shao, Pan Lu, Hongsheng Li, Yu Qiao, Peng Gao
Tags: MLLM · 08 Feb 2024

An Empirical Study of Scaling Instruct-Tuned Large Multimodal Models
Yadong Lu, Chunyuan Li, Haotian Liu, Jianwei Yang, Jianfeng Gao, Yelong Shen
Tags: MLLM · 18 Sep 2023

To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis
Fuzhao Xue, Yao Fu, Wangchunshu Zhou, Zangwei Zheng, Yang You
22 May 2023

MedMNIST v2 -- A large-scale lightweight benchmark for 2D and 3D biomedical image classification
Jiancheng Yang, Rui Shi, Donglai Wei, Zequan Liu, Lin Zhao, Bilian Ke, Hanspeter Pfister, Bingbing Ni
Tags: VLM · 27 Oct 2021