VISTA: A Visual and Textual Attention Dataset for Interpreting Multimodal Models

6 October 2024
Authors: Harshit, Tolga Tasdizen
Topics: CoGe, VLM

Papers citing "VISTA: A Visual and Textual Attention Dataset for Interpreting Multimodal Models"

2 / 2 papers shown
Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey
Yunkai Dang, Kaichen Huang, Jiahao Huo, Yibo Yan, S. Huang, ..., Kun Wang, Yong Liu, Jing Shao, Hui Xiong, Xuming Hu
LRM
03 Dec 2024
LeGrad: An Explainability Method for Vision Transformers via Feature Formation Sensitivity
Walid Bousselham, Angie Boggust, Sofian Chaybouti, Hendrik Strobelt, Hilde Kuehne
04 Apr 2024