arXiv:2408.15881
LLaVA-MoD: Making LLaVA Tiny via MoE Knowledge Distillation
28 August 2024
Fangxun Shu, Yue Liao, Le Zhuo, Chenning Xu, Guanghao Zhang, Haonan Shi, Long Chen, Tao Zhong, Wanggui He, Siming Fu, Haoyuan Li, Bolin Li, Zhelun Yu, Si Liu, Hongsheng Li, Hao Jiang
Tags: VLM, MoE
Papers citing "LLaVA-MoD: Making LLaVA Tiny via MoE Knowledge Distillation" (3 of 3 papers shown):
SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models
Chris Liu, Renrui Zhang, Longtian Qiu, Siyuan Huang, Weifeng Lin, ..., Hao Shao, Pan Lu, Hongsheng Li, Yu Qiao, Peng Gao
Tags: MLLM
116 · 106 · 0 · 08 Feb 2024
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, A. Kalyan
Tags: ELM, ReLM, LRM
198 · 1,089 · 0 · 20 Sep 2022
Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
283 · 5,723 · 0 · 29 Apr 2021