HIVE: Evaluating the Human Interpretability of Visual Explanations

6 December 2021
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky

Papers citing "HIVE: Evaluating the Human Interpretability of Visual Explanations"

19 papers shown
Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations
Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth C. Fong, Parastoo Abtahi
FAtt, HAI · 49 · 0 · 0 · 14 Apr 2025

Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models
Itay Benou, Tammy Riklin-Raviv
56 · 0 · 0 · 27 Feb 2025

Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis
91 · 7 · 0 · 06 Feb 2025

InfoDisent: Explainability of Image Classification Models by Information Disentanglement
Łukasz Struski, Dawid Rymarczyk, Jacek Tabor
43 · 0 · 0 · 16 Sep 2024

Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation
Hugo Porta, Emanuele Dalsasso, Diego Marcos, D. Tuia
87 · 0 · 0 · 14 Sep 2024

Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers
Lokesh Badisa, Sumohana S. Channappayya
29 · 0 · 0 · 17 Jun 2024

Graphical Perception of Saliency-based Model Explanations
Yayan Zhao, Mingwei Li, Matthew Berger
XAI, FAtt · 36 · 2 · 0 · 11 Jun 2024

Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova
36 · 5 · 0 · 02 May 2024

Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable and Trustworthy Image Recognition
Chong Wang, Yuanhong Chen, Fengbei Liu, Yuyuan Liu, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro
16 · 1 · 0 · 30 Nov 2023

Zero-shot Translation of Attention Patterns in VQA Models to Natural Language
Leonard Salewski, A. Sophia Koepke, Hendrik P. A. Lensch, Zeynep Akata
25 · 2 · 0 · 08 Nov 2023

FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
AAML · 19 · 32 · 0 · 11 Aug 2023

Precise Benchmarking of Explainable AI Attribution Methods
Rafael Brandt, Daan Raatjens, G. Gaydadjiev
XAI · 11 · 4 · 0 · 06 Aug 2023

Towards a Praxis for Intercultural Ethics in Explainable AI
Chinasa T. Okolo
21 · 2 · 0 · 24 Apr 2023

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs
V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky
8 · 0 · 0 · 27 Mar 2023

ICICLE: Interpretable Class Incremental Continual Learning
Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartłomiej Twardowski
CLL · 13 · 28 · 0 · 14 Mar 2023

Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
Valerie Chen, Q. V. Liao, Jennifer Wortman Vaughan, Gagan Bansal
21 · 102 · 0 · 18 Jan 2023

Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis
Samuele Poppi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara
FAtt · 125 · 30 · 0 · 20 Apr 2021

Estimating Example Difficulty Using Variance of Gradients
Chirag Agarwal, Daniel D'souza, Sara Hooker
190 · 103 · 0 · 26 Aug 2020

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML · 222 · 3,658 · 0 · 28 Feb 2017