Convincing Rationales for Visual Question Answering Reasoning

Visual Question Answering (VQA) is the challenging task of predicting the answer to a question about the content of an image. It requires a deep understanding of both the textual question and the visual content of the image. Prior work evaluates answering models simply by the accuracy of their predicted answers. However, such a "black box" evaluation disregards the reasoning behind the prediction, so it is unclear whether the predictions can be trusted. In some cases, a model produces the correct answer even while attending to irrelevant visual regions or textual tokens, which makes it unreliable and illogical. To generate both visual and textual rationales alongside the predicted answer for a given image/question pair, we propose Multimodal Rationales for VQA (MRVQA). Because the new outputs require extra annotations, MRVQA is trained and evaluated on samples converted from existing VQA datasets and their visual labels. Extensive experiments demonstrate that the visual and textual rationales support the predicted answers and further improve accuracy. Furthermore, MRVQA achieves competitive performance on generic VQA datasets in the zero-shot evaluation setting. The dataset and source code will be released under this https URL.
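As a rough illustration of the interface such a system exposes (a minimal sketch, not the paper's actual implementation), a rationale-producing VQA model returns a textual explanation and a set of salient image regions alongside the answer. The names `MRVQAOutput` and `answer_with_rationales`, and the bounding-box format, are hypothetical assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical container for the three outputs described in the abstract:
# the answer, a textual rationale, and a visual rationale (salient regions).
@dataclass
class MRVQAOutput:
    answer: str
    textual_rationale: str                              # natural-language justification
    visual_rationale: List[Tuple[int, int, int, int]]   # (x, y, w, h) boxes of relevant regions

def answer_with_rationales(
    model: Callable,        # assumed callable: (image, question) -> (answer, text_rationale, boxes)
    image,
    question: str,
) -> MRVQAOutput:
    """Run a rationale-producing VQA model on one image/question pair (sketch)."""
    answer, text_rationale, boxes = model(image, question)
    return MRVQAOutput(
        answer=answer,
        textual_rationale=text_rationale,
        visual_rationale=boxes,
    )
```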
@article{li2025_2402.03896,
  title={Convincing Rationales for Visual Question Answering Reasoning},
  author={Kun Li and George Vosselman and Michael Ying Yang},
  journal={arXiv preprint arXiv:2402.03896},
  year={2025}
}