What Makes a Good Explanation?: A Harmonized View of Properties of Explanations

10 November 2022
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
Tags: XAI, FAtt
Links: ArXiv · PDF · HTML

Papers citing "What Makes a Good Explanation?: A Harmonized View of Properties of Explanations"

21 / 21 papers shown
On Explaining (Large) Language Models For Code Using Global Code-Based Explanations (21 Mar 2025)
David Nader-Palacio, Dipin Khati, Daniel Rodríguez-Cárdenas, Alejandro Velasco, Denys Poshyvanyk
Tags: LRM · Citations: 0

Axiomatic Explainer Globalness via Optimal Transport (13 Mar 2025)
Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy
Citations: 1

Rethinking LLM Bias Probing Using Lessons from the Social Sciences (28 Feb 2025)
Kirsten N. Morehouse, S. Swaroop, Weiwei Pan
Citations: 0

Directly Optimizing Explanations for Desired Properties (31 Oct 2024)
Hiwot Belay Tadesse, Alihan Hüyük, Weiwei Pan, Finale Doshi-Velez
Tags: FAtt · Citations: 0

The Quest for the Right Mediator: A History, Survey, and Theoretical Grounding of Causal Interpretability (02 Aug 2024)
Aaron Mueller, Jannik Brinkmann, Millicent Li, Samuel Marks, Koyena Pal, ..., Arnab Sen Sharma, Jiuding Sun, Eric Todd, David Bau, Yonatan Belinkov
Tags: CML · Citations: 18

Helpful or Harmful Data? Fine-tuning-free Shapley Attribution for Explaining Language Model Predictions (07 Jun 2024)
Jingtan Wang, Xiaoqiang Lin, Rui Qiao, Chuan-Sheng Foo, Bryan Kian Hsiang Low
Tags: TDI · Citations: 3

A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning (31 May 2024)
Eura Nofshin, Esther Brown, Brian Lim, Weiwei Pan, Finale Doshi-Velez
Citations: 0

Explaining Arguments' Strength: Unveiling the Role of Attacks and Supports (Technical Report) (22 Apr 2024)
Xiang Yin, Nico Potyka, Francesca Toni
Tags: FAtt · Citations: 3

The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment (21 Apr 2024)
Nari Johnson, Sanika Moharana, Christina Harrington, Nazanin Andalibi, Hoda Heidari, Motahhare Eslami
Citations: 7

Investigating the Impact of Model Instability on Explanations and Uncertainty (20 Feb 2024)
Sara Vera Marjanović, Isabelle Augenstein, Christina Lioma
Tags: AAML · Citations: 0

CAFE: Conflict-Aware Feature-wise Explanations (31 Oct 2023)
Adam Dejl, Hamed Ayoobi, Matthew Williams, Francesca Toni
Tags: FAtt, BDL · Citations: 2

Argument Attribution Explanations in Quantitative Bipolar Argumentation Frameworks (Technical Report) (25 Jul 2023)
Xiang Yin, Nico Potyka, Francesca Toni
Citations: 7

Interpretable Regional Descriptors: Hyperbox-Based Local Explanations (04 May 2023)
Susanne Dandl, Giuseppe Casalicchio, Bernd Bischl, Ludwig Bothmann
Citations: 8

Why is plausibility surprisingly problematic as an XAI criterion? (30 Mar 2023)
Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
Citations: 3

Who's Thinking? A Push for Human-Centered Evaluation of LLMs using the XAI Playbook (10 Mar 2023)
Teresa Datta, John P. Dickerson
Citations: 9

COMET: Neural Cost Model Explanation Framework (14 Feb 2023)
Isha Chaudhary, Alex Renda, Charith Mendis, Gagandeep Singh
Citations: 2

Explaining Image Classification with Visual Debates (17 Oct 2022)
Avinash Kori, Ben Glocker, Francesca Toni
Citations: 1

Explainability in Graph Neural Networks: A Taxonomic Survey (31 Dec 2020)
Hao Yuan, Haiyang Yu, Shurui Gui, Shuiwang Ji
Citations: 590

A Survey on Neural Network Interpretability (28 Dec 2020)
Yu Zhang, Peter Tiño, A. Leonardis, K. Tang
Tags: FaML, XAI · Citations: 656

On Completeness-aware Concept-Based Explanations in Deep Neural Networks (17 Oct 2019)
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
Tags: FAtt · Citations: 297

ImageNet Large Scale Visual Recognition Challenge (01 Sep 2014)
Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
Tags: VLM, ObjD · Citations: 39,170