
Attention, Please! PixelSHAP Reveals What Vision-Language Models Actually Focus On

Abstract

Interpretability in Vision-Language Models (VLMs) is crucial for trust, debugging, and decision-making in high-stakes applications. We introduce PixelSHAP, a model-agnostic framework that extends Shapley-based analysis to structured visual entities. Unlike previous methods that focus on text prompts, PixelSHAP targets vision-based reasoning by systematically perturbing image objects and quantifying their influence on a VLM's response. PixelSHAP requires no access to model internals, operating solely on input-output pairs, which makes it compatible with both open-source and commercial models. It supports diverse embedding-based similarity metrics and scales efficiently using optimization techniques inspired by Shapley-based methods. We validate PixelSHAP in autonomous driving, highlighting its ability to enhance interpretability. Key challenges include segmentation sensitivity and object occlusion. Our open-source implementation facilitates further research.
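
The abstract describes a perturbation-based attribution loop: segment the image into objects, occlude subsets of them, re-query the VLM, and score each object by how much its removal shifts the response embedding relative to the full-image answer. The sketch below illustrates that idea with a standard Monte-Carlo Shapley estimator; it is not the authors' implementation, and query_vlm, embed, and the precomputed object masks are placeholders for the model under study, a text-embedding model, and a segmentation step.

import numpy as np

def mask_objects(image, masks, keep):
    # Gray out every segmented object whose index is NOT in `keep`.
    # `masks` is a list of boolean HxW arrays, `image` an HxWx3 uint8 array.
    out = image.copy()
    for i, m in enumerate(masks):
        if i not in keep:
            out[m] = 127  # neutral fill; inpainting is another option
    return out

def cosine(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def pixelshap_values(image, masks, query_vlm, embed, n_samples=32, seed=0):
    # Monte-Carlo Shapley estimate of each object's influence on the VLM answer.
    # Influence = change in similarity (to the full-image response embedding)
    # when the object is revealed, averaged over random object orderings.
    rng = np.random.default_rng(seed)
    n = len(masks)
    base_emb = embed(query_vlm(image))
    contrib = np.zeros(n)
    for _ in range(n_samples):
        perm = rng.permutation(n)
        kept = set()
        prev = cosine(base_emb, embed(query_vlm(mask_objects(image, masks, kept))))
        for i in perm:
            kept.add(i)
            cur = cosine(base_emb, embed(query_vlm(mask_objects(image, masks, kept))))
            contrib[i] += cur - prev  # marginal gain from revealing object i
            prev = cur
    return contrib / n_samples

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; replace with a real VLM,
    # embedding model, and segmentation masks in practice.
    img = np.zeros((4, 4, 3), dtype=np.uint8)
    m1 = np.zeros((4, 4), bool); m1[:2] = True
    m2 = np.zeros((4, 4), bool); m2[2:] = True
    fake_vlm = lambda im: f"mean={im.mean():.1f}"
    fake_embed = lambda text: np.array([float(text.split('=')[1]), 1.0])
    print(pixelshap_values(img, [m1, m2], fake_vlm, fake_embed))

Because only input-output pairs are needed, the same loop works for commercial APIs; the main cost is the number of VLM queries, which is why the paper emphasizes efficient sampling.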

@article{goldshmidt2025_2503.06670,
  title={Attention, Please! PixelSHAP Reveals What Vision-Language Models Actually Focus On},
  author={Roni Goldshmidt},
  journal={arXiv preprint arXiv:2503.06670},
  year={2025}
}