We introduce CHARTOM, a visual theory-of-mind benchmark for multimodal large language models. CHARTOM consists of specially designed data-visualization charts. Given a chart, a model must not only comprehend the chart correctly (the FACT question) but also judge whether the chart will mislead a human reader (the MIND question). Answering both questions correctly has significant societal benefits. We detail the construction of the CHARTOM benchmark, including its calibration against human performance. We benchmark leading multimodal LLMs as of late 2024, including GPT, Claude, Gemini, Qwen, Llama, and LLaVA, on the CHARTOM dataset and find that the benchmark is challenging for all of them, suggesting room for future models to improve.
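To make the two-question setup concrete, below is a minimal sketch of how a CHARTOM-style item might be represented and scored: each chart is paired with a FACT question (chart comprehension) and a MIND question (whether the chart would mislead a human reader). The field names and the `ask_model` helper are illustrative assumptions, not the paper's released data format or evaluation code.

```python
from dataclasses import dataclass

@dataclass
class ChartomItem:
    chart_path: str        # path to the chart image
    fact_question: str     # e.g. "What is the value shown for category B?"
    fact_answer: str       # ground-truth answer from the underlying data
    mind_question: str     # e.g. "Would a typical reader be misled by this chart?"
    mind_label: bool       # human-calibrated label: True if the chart is misleading

def score_item(item: ChartomItem, ask_model) -> dict:
    """Query a multimodal model on both questions for one chart.

    `ask_model(image_path, prompt) -> str` is assumed to wrap whatever
    multimodal API is being benchmarked; it is not part of CHARTOM itself.
    """
    fact_pred = ask_model(item.chart_path, item.fact_question)
    mind_pred = ask_model(item.chart_path, item.mind_question)
    return {
        "fact_correct": fact_pred.strip().lower() == item.fact_answer.strip().lower(),
        "mind_correct": ("yes" in mind_pred.lower()) == item.mind_label,
    }
```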
@article{bharti2025_2408.14419,
  title   = {CHARTOM: A Visual Theory-of-Mind Benchmark for Multimodal Large Language Models},
  author  = {Shubham Bharti and Shiyun Cheng and Jihyun Rho and Jianrui Zhang and Mu Cai and Yong Jae Lee and Martina Rau and Xiaojin Zhu},
  journal = {arXiv preprint arXiv:2408.14419},
  year    = {2025}
}