
Towards Artwork Explanation in Large-scale Vision Language Models

Kazuki Hayashi
Yusuke Sakai
Hidetaka Kamigaito
Katsuhiko Hayashi
Taro Watanabe
Main: 7 pages · Bibliography: 4 pages · Appendix: 16 pages · 8 figures · 19 tables
Abstract

Large-scale Vision-Language Models (LVLMs) generate text from images and instructions, demonstrating strong capabilities in text generation and comprehension. However, it remains unclear to what extent LVLMs understand the knowledge necessary for explaining images, the complex relationships between pieces of that knowledge, and how they integrate these understandings into their explanations. To address this issue, we propose a new task, artwork explanation generation, along with an evaluation dataset and metrics for quantitatively assessing how well models understand and utilize knowledge about artworks. Artworks are well suited to this task because they are widely recognized and extensively documented, so LVLMs can be expected to hold pre-existing knowledge about them. The task consists of two parts: generating explanations from both images and titles of artworks, and generating explanations from images alone, thereby evaluating the LVLMs' language-based and vision-based knowledge, respectively. We also release a training dataset for LVLMs to learn explanations that incorporate knowledge about artworks. Our findings indicate that LVLMs not only struggle to integrate language and visual information but also exhibit a more pronounced limitation in acquiring knowledge from images alone.
