The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
arXiv 2205.03295 (6 May 2022)
Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi
Papers citing "The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations" (40 of 40 shown)
Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods
Mahdi Dhaini, Ege Erdogan, Nils Feldhus, Gjergji Kasneci. 02 May 2025. 0 citations.

In defence of post-hoc explanations in medical AI
Joshua Hatherley, Lauritz Munch, Jens Christian Bjerring. 29 Apr 2025. 0 citations.

Directly Optimizing Explanations for Desired Properties
Hiwot Belay Tadesse, Alihan Hüyük, Weiwei Pan, Finale Doshi-Velez. Tags: FAtt. 31 Oct 2024. 0 citations.

ConLUX: Concept-Based Local Unified Explanations
Junhao Liu, Haonan Yu, Xin Zhang. Tags: FAtt, LRM. 16 Oct 2024. 0 citations.

Developing Guidelines for Functionally-Grounded Evaluation of Explainable Artificial Intelligence using Tabular Data
M. Velmurugan, Chun Ouyang, Yue Xu, Renuka Sindhgatta, B. Wickramanayake, Catarina Moreira. Tags: ELM, LMTD, XAI. 30 Sep 2024. 0 citations.

A Catalog of Fairness-Aware Practices in Machine Learning Engineering
Gianmario Voria, Giulia Sellitto, Carmine Ferrara, Francesco Abate, A. Lucia, F. Ferrucci, Gemma Catolino, Fabio Palomba. Tags: FaML. 29 Aug 2024. 3 citations.

To which reference class do you belong? Measuring racial fairness of reference classes with normative modeling
S. Rutherford, T. Wolfers, Charlotte J. Fraza, Nathaniel G. Harnett, Christian F. Beckmann, H. Ruhé, A. Marquand. Tags: CML. 26 Jul 2024. 2 citations.

A Sim2Real Approach for Identifying Task-Relevant Properties in Interpretable Machine Learning
Eura Nofshin, Esther Brown, Brian Lim, Weiwei Pan, Finale Doshi-Velez. 31 May 2024. 0 citations.

SIDEs: Separating Idealization from Deceptive Explanations in xAI
Emily Sullivan. 25 Apr 2024. 2 citations.

Evaluating Physician-AI Interaction for Cancer Management: Paving the Path towards Precision Oncology
Zeshan Hussain, Barbara D. Lam, Fernando A. Acosta-Perez, I. Riaz, Maia L. Jacobs, Andrew J. Yee, David Sontag. 23 Apr 2024. 0 citations.

From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap
Tianqi Kou. 19 Apr 2024. 0 citations.

Accurate estimation of feature importance faithfulness for tree models
Mateusz Gajewski, Adam Karczmarz, Mateusz Rapicki, Piotr Sankowski. 04 Apr 2024. 0 citations.

On Explaining Unfairness: An Overview
Christos Fragkathoulas, Vasiliki Papanikou, Danae Pla Karidi, E. Pitoura. Tags: XAI, FaML. 16 Feb 2024. 2 citations.

Understanding Disparities in Post Hoc Machine Learning Explanation
Vishwali Mhasawade, Salman Rahman, Zoe Haskell-Craig, R. Chunara. 25 Jan 2024. 4 citations.

SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning
Julien Ferry, Ulrich Aivodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala. Tags: FaML. 22 Dec 2023. 5 citations.

NLP for Maternal Healthcare: Perspectives and Guiding Principles in the Age of LLMs
Maria Antoniak, Aakanksha Naik, Carla S. Alvarado, Lucy Lu Wang, Irene Y. Chen. Tags: AILaw. 19 Dec 2023. 14 citations.

Error Discovery by Clustering Influence Embeddings
Fulton Wang, Julius Adebayo, Sarah Tan, Diego Garcia-Olano, Narine Kokhlikyan. 07 Dec 2023. 3 citations.

SoK: Unintended Interactions among Machine Learning Defenses and Risks
Vasisht Duddu, S. Szyller, Nadarajah Asokan. Tags: AAML. 07 Dec 2023. 2 citations.

LUCID-GAN: Conditional Generative Models to Locate Unfairness
Andres Algaba, Carmen Mazijn, Carina E. A. Prunkl, J. Danckaert, Vincent Ginis. Tags: SyDa. 28 Jul 2023. 1 citation.

Towards an AI Accountability Policy
Przemyslaw A. Grabowicz, Nicholas Perello, Yair Zick. 25 Jul 2023. 0 citations.

Simple Steps to Success: Axiomatics of Distance-Based Algorithmic Recourse
Jenny Hamer, Jake Valladares, Vignesh Viswanathan, Yair Zick. 27 Jun 2023. 0 citations.

Reason to explain: Interactive contrastive explanations (REASONX)
Laura State, Salvatore Ruggieri, Franco Turini. Tags: LRM. 29 May 2023. 1 citation.

Evaluating the Impact of Social Determinants on Health Prediction in the Intensive Care Unit
M. Yang, Gloria Hyunjung Kwak, Tom Pollard, L. A. Celi, Marzyeh Ghassemi. 22 May 2023. 10 citations.

MLHOps: Machine Learning for Healthcare Operations
Kristoffer Larsen, Vallijah Subasri, A. Krishnan, Cláudio Tinoco Mesquita, Diana Paez, Laleh Seyyed-Kalantari, Amalia Peix. Tags: LM&MA, AI4TS, VLM. 04 May 2023. 2 citations.

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
L. Nannini, Agathe Balayn, A. Smith. 20 Apr 2023. 37 citations.

Feature Importance Disparities for Data Bias Investigations
Peter W. Chang, Leor Fishman, Seth Neel. 03 Mar 2023. 1 citation.

On the Impact of Explanations on Understanding of Algorithmic Decision-Making
Timothée Schmude, Laura M. Koesten, Torsten Möller, Sebastian Tschiatschek. 16 Feb 2023. 15 citations.

Tensions Between the Proxies of Human Values in AI
Teresa Datta, D. Nissani, Max Cembalest, Akash Khanna, Haley Massa, John P. Dickerson. 14 Dec 2022. 2 citations.

"Explain it in the Same Way!" -- Model-Agnostic Group Fairness of Counterfactual Explanations
André Artelt, Barbara Hammer. Tags: FaML. 27 Nov 2022. 8 citations.

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez. Tags: XAI, FAtt. 10 Nov 2022. 17 citations.

A survey of Identification and mitigation of Machine Learning algorithmic biases in Image Analysis
Laurent Risser, Agustin Picard, Lucas Hervier, Jean-Michel Loubes. Tags: FaML. 10 Oct 2022. 5 citations.

LUCID: Exposing Algorithmic Bias through Inverse Design
Carmen Mazijn, Carina E. A. Prunkl, Andres Algaba, J. Danckaert, Vincent Ginis. Tags: SyDa. 26 Aug 2022. 4 citations.

OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju. Tags: XAI. 22 Jun 2022. 140 citations.

Saliency Cards: A Framework to Characterize and Compare Saliency Methods
Angie Boggust, Harini Suresh, Hendrik Strobelt, John Guttag, Arvind Satyanarayan. Tags: FAtt, XAI. 07 Jun 2022. 8 citations.

Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
Jessica Dai, Sohini Upadhyay, Ulrich Aivodji, Stephen H. Bach, Himabindu Lakkaraju. 15 May 2022. 56 citations.

In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction
Caroline Linjun Wang, Bin Han, Bhrij Patel, Cynthia Rudin. Tags: FaML, HAI. 08 May 2020. 83 citations.

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan. Tags: SyDa, FaML. 23 Aug 2019. 4,187 citations.

Learning Adversarially Fair and Transferable Representations
David Madras, Elliot Creager, T. Pitassi, R. Zemel. Tags: FaML. 17 Feb 2018. 669 citations.

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim. Tags: XAI, FaML. 28 Feb 2017. 3,672 citations.

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova. Tags: FaML. 24 Oct 2016. 2,082 citations.