Enhancing Human-like Multi-Modal Reasoning: A New Challenging Dataset and Comprehensive Framework (arXiv:2307.12626)
24 July 2023
Jingxuan Wei, Cheng Tan, Zhangyang Gao, Linzhuang Sun, Siyuan Li, Bihui Yu, R. Guo, Stan Z. Li
Papers citing "Enhancing Human-like Multi-Modal Reasoning: A New Challenging Dataset and Comprehensive Framework" (4 of 4 papers shown):
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, A. Kalyan
20 Sep 2022
VQA-MHUG: A Gaze Dataset to Study Multimodal Neural Attention in Visual Question Answering
Ekta Sood, Fabian Kögel, Florian Strohm, Prajit Dhar, Andreas Bulling
27 Sep 2021
Co-learning: Learning from Noisy Labels with Self-supervision
Cheng Tan, Jun-Xiong Xia, Lirong Wu, Stan Z. Li
05 Aug 2021
A Survey on VQA: Datasets and Approaches
Yeyun Zou, Qiyu Xie
02 May 2021