
arXiv:2205.03295 · Cited By
The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations

6 May 2022
Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi

Papers citing "The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations"

16 / 16 papers shown
Gender Bias in Explainability: Investigating Performance Disparity in Post-hoc Methods
Mahdi Dhaini, Ege Erdogan, Nils Feldhus, Gjergji Kasneci
39 · 0 · 0 · 02 May 2025

In defence of post-hoc explanations in medical AI
Joshua Hatherley, Lauritz Munch, Jens Christian Bjerring
26 · 0 · 0 · 29 Apr 2025

A Catalog of Fairness-Aware Practices in Machine Learning Engineering
Gianmario Voria, Giulia Sellitto, Carmine Ferrara, Francesco Abate, A. Lucia, F. Ferrucci, Gemma Catolino, Fabio Palomba
FaML
29 · 3 · 0 · 29 Aug 2024

To which reference class do you belong? Measuring racial fairness of reference classes with normative modeling
S. Rutherford, T. Wolfers, Charlotte J. Fraza, Nathaniel G. Harnett, Christian F. Beckmann, H. Ruhé, A. Marquand
CML
29 · 2 · 0 · 26 Jul 2024

From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap
Tianqi Kou
32 · 0 · 0 · 19 Apr 2024

Accurate estimation of feature importance faithfulness for tree models
Mateusz Gajewski, Adam Karczmarz, Mateusz Rapicki, Piotr Sankowski
27 · 0 · 0 · 04 Apr 2024

LUCID-GAN: Conditional Generative Models to Locate Unfairness
Andres Algaba, Carmen Mazijn, Carina E. A. Prunkl, J. Danckaert, Vincent Ginis
SyDa
21 · 1 · 0 · 28 Jul 2023

Reason to explain: Interactive contrastive explanations (REASONX)
Laura State, Salvatore Ruggieri, Franco Turini
LRM
11 · 1 · 0 · 29 May 2023

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
L. Nannini, Agathe Balayn, A. Smith
6 · 37 · 0 · 20 Apr 2023

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
XAI, FAtt
14 · 17 · 0 · 10 Nov 2022

Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
Jessica Dai, Sohini Upadhyay, Ulrich Aivodji, Stephen H. Bach, Himabindu Lakkaraju
35 · 55 · 0 · 15 May 2022

In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction
Caroline Linjun Wang, Bin Han, Bhrij Patel, Cynthia Rudin
FaML, HAI
57 · 83 · 0 · 08 May 2020

A Survey on Bias and Fairness in Machine Learning
Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
SyDa, FaML
294 · 4,143 · 0 · 23 Aug 2019

Learning Adversarially Fair and Transferable Representations
David Madras, Elliot Creager, T. Pitassi, R. Zemel
FaML
210 · 663 · 0 · 17 Feb 2018

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
225 · 3,658 · 0 · 28 Feb 2017

Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Alexandra Chouldechova
FaML
185 · 2,079 · 0 · 24 Oct 2016