AffectGPT: Dataset and Framework for Explainable Multimodal Emotion Recognition
arXiv: 2407.07653 · 10 July 2024
Authors: Zheng Lian, Haiyang Sun, Licai Sun, Jiangyan Yi, Bin Liu, Jianhua Tao
Papers citing "AffectGPT: Dataset and Framework for Explainable Multimodal Emotion Recognition" (4 / 4 papers shown)
OneLLM: One Framework to Align All Modalities with Language (10 Jan 2025)
Jiaming Han, Kaixiong Gong, Yiyuan Zhang, Jiaqi Wang, Kaipeng Zhang, D. Lin, Yu Qiao, Peng Gao, Xiangyu Yue
Tags: MLLM · 101 · 102 · 0

Language Model Can Listen While Speaking (05 Aug 2024)
Ziyang Ma, Yakun Song, Chenpeng Du, Jian Cong, Zhuo Chen, Yuping Wang, Y. Wang, Xie Chen
Tags: AuLLM · 29 · 23 · 0

Video-LLaVA: Learning United Visual Representation by Alignment Before Projection (16 Nov 2023)
Bin Lin, Yang Ye, Bin Zhu, Jiaxi Cui, Munan Ning, Peng Jin, Li-ming Yuan
Tags: VLM, MLLM · 185 · 576 · 0

mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality (27 Apr 2023)
Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, ..., Junfeng Tian, Qiang Qi, Ji Zhang, Feiyan Huang, Jingren Zhou
Tags: VLM, MLLM · 203 · 883 · 0