ResearchTrend.AI
Interpreting Interpretations: Organizing Attribution Methods by Criteria
arXiv:2002.07985 (v2, latest) · 19 February 2020
Zifan Wang, Piotr (Peter) Mardziel, Anupam Datta, Matt Fredrikson
Topics: XAI, FAtt

Papers citing "Interpreting Interpretations: Organizing Attribution Methods by Criteria"

9 of 9 papers shown
Smart Sensor Placement: A Correlation-Aware Attribution Framework (CAAF) for Real-world Data Modeling
Sze Chai Leung, Di Zhou, H. J. Bae
Topics: OOD · 26 Oct 2025

Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI
International Journal of Human-Computer Interaction (IJHCI), 2023
Romy Müller, Marius Thoss, Julian Ullrich, Steffen Seitz, Carsten Knoll
21 Nov 2023

Mapping Knowledge Representations to Concepts: A Review and New Perspectives
Lars Holmberg, P. Davidsson, Per Linde
31 Dec 2022

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
Topics: XAI, FAtt · 10 Nov 2022

Towards Benchmarking Explainable Artificial Intelligence Methods
Lars Holmberg
25 Aug 2022

Faithful Explanations for Deep Graph Models
Zifan Wang, Yuhang Yao, Chaoran Zhang, Han Zhang, Youjie Kang, Carlee Joe-Wong, Matt Fredrikson, Anupam Datta
Topics: FAtt · 24 May 2022

Robust Models Are More Interpretable Because Attributions Look Normal
International Conference on Machine Learning (ICML), 2021
Zifan Wang, Matt Fredrikson, Anupam Datta
Topics: OOD, FAtt · 20 Mar 2021

Reconstructing Actions To Explain Deep Reinforcement Learning
Xuan Chen, Zifan Wang, Yucai Fan, Bonan Jin, Piotr (Peter) Mardziel, Carlee Joe-Wong, Anupam Datta
Topics: FAtt · 17 Sep 2020

Evaluating and Aggregating Feature-based Model Explanations
International Joint Conference on Artificial Intelligence (IJCAI), 2020
Umang Bhatt, Adrian Weller, J. M. F. Moura
Topics: XAI · 01 May 2020