Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations?

29 April 2024
Letitia Parcalabescu
Anette Frank
Topics: MLLM · CoGe · VLM
Abstract

Vision and language model (VLM) decoders are currently the best-performing architectures on multimodal tasks. In addition to answers, they can produce natural language explanations, either post-hoc or in chain-of-thought (CoT) settings. However, it is unclear to what extent they actually use the input vision and text modalities when generating answers or explanations. In this work, we investigate whether VLMs rely on their input modalities differently when producing explanations as opposed to answers. We also evaluate the self-consistency of VLM decoders in both post-hoc and CoT explanation settings, by extending existing unimodal tests and measures to VLM decoders. We find that most tested VLMs are less self-consistent than LLMs. Across all examined tasks, text contributions in all tested VL decoders are more important than image contributions. However, image contributions are significantly stronger when generating explanations than when generating answers, and this difference is even larger for CoT than for post-hoc explanations. Lastly, we provide an up-to-date benchmarking of state-of-the-art VL decoders on the VALSE benchmark, which was previously restricted to VL encoders. We find that the tested VL decoders still struggle with most phenomena tested by VALSE.
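The abstract refers to measuring how much each input modality (image vs. text) contributes to a decoder's output. The sketch below is only an illustration of that general idea, not the authors' actual measure: it estimates modality shares by randomly occluding input tokens and averaging the resulting score drops. The score_fn interface and the token-masking scheme are hypothetical assumptions for this example.

# Minimal occlusion-style sketch of a modality-contribution measure (illustrative
# assumption, not the paper's method). `score_fn(image_tokens, text_tokens)` is a
# hypothetical interface returning the model's log-probability of a fixed output
# (an answer or an explanation) given the visible subset of input tokens.

import random
from typing import Callable, Sequence

def modality_contribution(
    score_fn: Callable[[Sequence[int], Sequence[int]], float],
    n_image_tokens: int,
    n_text_tokens: int,
    n_samples: int = 200,
    seed: int = 0,
) -> dict:
    """Estimate image vs. text contribution as the average score drop when
    half of one modality's tokens are randomly occluded."""
    rng = random.Random(seed)
    full_image = list(range(n_image_tokens))
    full_text = list(range(n_text_tokens))
    base = score_fn(full_image, full_text)  # score with all inputs visible

    def avg_drop(occlude_image: bool) -> float:
        drops = []
        for _ in range(n_samples):
            if occlude_image:
                kept = [i for i in full_image if rng.random() > 0.5]
                drops.append(base - score_fn(kept, full_text))
            else:
                kept = [t for t in full_text if rng.random() > 0.5]
                drops.append(base - score_fn(full_image, kept))
        return sum(drops) / len(drops)

    img, txt = avg_drop(True), avg_drop(False)
    total = (img + txt) or 1e-9
    # Normalised shares; comparing these shares between answer generation and
    # explanation generation is the kind of contrast the abstract describes.
    return {"image_share": img / total, "text_share": txt / total}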

View on arXiv: https://arxiv.org/abs/2404.18624
@article{parcalabescu2025_2404.18624,
  title={Do Vision & Language Decoders use Images and Text equally? How Self-consistent are their Explanations?},
  author={Letitia Parcalabescu and Anette Frank},
  journal={arXiv preprint arXiv:2404.18624},
  year={2025}
}