ResearchTrend.AI

Fairwashing: the risk of rationalization (arXiv:1901.09749)

28 January 2019
Ulrich Aïvodji, Hiromi Arai, O. Fortineau, Sébastien Gambs, Satoshi Hara, Alain Tapp

Papers citing "Fairwashing: the risk of rationalization"

50 / 87 papers shown
  • Interpretable Model-Aware Counterfactual Explanations for Random Forest. Joshua S. Harvey, Guanchao Feng, Sai Anusha Meesala, Tina Zhao, Dhagash Mehta. 31 Oct 2025.
  • Restricted Receptive Fields for Face Verification. Kagan Öztürk, Aman Bhatta, Haiyu Wu, Patrick Flynn, Kevin W. Bowyer. 12 Oct 2025.
  • Secure human oversight of AI: Threat modeling in a socio-technical context. Jonas C. Ditz, Veronika Lazar, Elmar Lichtmeß, Carola Plesch, Matthias Heck, Kevin Baum, Markus Langer. 15 Sep 2025.
  • Red Teaming AI Policy: A Taxonomy of Avoision and the EU AI Act (Conference on Fairness, Accountability and Transparency (FAccT), 2025). Rui-Jie Yew, Bill Marino, Suresh Venkatasubramanian. 02 Jun 2025.
  • Reality Check: A New Evaluation Ecosystem Is Necessary to Understand AI's Real World Effects. Reva Schwartz, Rumman Chowdhury, Akash Kundu, Heather Frase, Marzieh Fadaee, ..., Andrew Thompson, Maya Carlyle, Qinghua Lu, Matthew Holmes, Theodora Skeadas. 24 May 2025.
  • Fair Play for Individuals, Foul Play for Groups? Auditing Anonymization's Impact on ML Fairness. Héber H. Arcolezi, Mina Alishahi, Adda-Akram Bendoukha, Nesrine Kaaniche. 12 May 2025.
  • Crowding Out The Noise: Algorithmic Collective Action Under Differential Privacy. Rushabh Solanki, Meghana Bhange, Ulrich Aïvodji, Elliot Creager. 09 May 2025.
  • Beware of "Explanations" of AI. David Martens, Galit Shmueli, Theodoros Evgeniou, Kevin Bauer, Christian Janiesch, ..., Claudia Perlich, Wouter Verbeke, Alona Zharova, Patrick Zschech, F. Provost. 09 Apr 2025.
  • Fairness and Sparsity within Rashomon sets: Enumeration-Free Exploration and Characterization. Lucas Langlade, Julien Ferry, Gabriel Laberge, Thibaut Vidal. 07 Feb 2025.
  • ExpProof: Operationalizing Explanations for Confidential Models with ZKPs. Chhavi Yadav, Evan Monroe Laufer, Dan Boneh, Kamalika Chaudhuri. 06 Feb 2025.
  • Feature Responsiveness Scores: Model-Agnostic Explanations for Recourse (International Conference on Learning Representations (ICLR), 2024). Seung Hyun Cheon, Anneke Wernerfelt, Sorelle A. Friedler, Berk Ustun. 29 Oct 2024.
  • Why do explanations fail? A typology and discussion on failures in XAI. Clara Bove, Thibault Laugel, Marie-Jeanne Lesot, C. Tijus, Marcin Detyniecki. 22 May 2024.
  • Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle. Luca Deck, Astrid Schomacker, Timo Speith, Jakob Schöffer, Lena Kästner, Niklas Kühl. 29 Apr 2024.
  • SIDEs: Separating Idealization from Deceptive Explanations in xAI. Emily Sullivan. 25 Apr 2024.
  • Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks. Jon Vadillo, Roberto Santana, J. A. Lozano, Marta Z. Kwiatkowska. 20 Mar 2024.
  • Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias (Conference on Fairness, Accountability and Transparency (FAccT), 2024). Sierra Wyllie, Ilia Shumailov, Nicolas Papernot. 12 Mar 2024.
  • Black-Box Access is Insufficient for Rigorous AI Audits (Conference on Fairness, Accountability and Transparency (FAccT), 2024). Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell. 25 Jan 2024.
  • X Hacking: The Threat of Misguided AutoML. Rahul Sharma, Sergey Redyuk, Sumantrak Mukherjee, Andrea Sipka, Eyke Hüllermeier, Sebastian Vollmer, David Selby. 16 Jan 2024.
  • SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning. Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala. 22 Dec 2023.
  • Survey on AI Ethics: A Socio-technical Perspective (International Conference on Climate Informatics (ICCI), 2023). Dave Mbiazi, Meghana Bhange, Maryam Babaei, Ivaxi Sheth, Patrik Kenfack, Samira Ebrahimi Kahou. 28 Nov 2023.
  • Fair Enough? A map of the current limitations of the requirements to have "fair" algorithms. Alessandro Castelnovo, Nicole Inverardi, Gabriele Nanino, Ilaria Giuseppina Penco, D. Regoli. 21 Nov 2023.
  • Characterizing Large Language Models as Rationalizers of Knowledge-intensive Tasks. Aditi Mishra, Sajjadur Rahman, H. Kim, Kushan Mitra, Estevam R. Hruschka. 09 Nov 2023.
  • A Critical Survey on Fairness Benefits of Explainable AI. Luca Deck, Jakob Schoeffer, Maria De-Arteaga, Niklas Kühl. 15 Oct 2023.
  • One Model Many Scores: Using Multiverse Analysis to Prevent Fairness Hacking and Evaluate the Influence of Model Design Decisions (Conference on Fairness, Accountability and Transparency (FAccT), 2023). Jan Simson, Florian Pfisterer, Christoph Kern. 31 Aug 2023.
  • Probabilistic Dataset Reconstruction from Interpretable Models. Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala. 29 Aug 2023.
  • Fairness Explainability using Optimal Transport with Applications in Image Classification. Philipp Ratz, François Hu, Arthur Charpentier. 22 Aug 2023.
  • Manipulation Risks in Explainable AI: The Implications of the Disagreement Problem. S. Goethals, David Martens, Theodoros Evgeniou. 24 Jun 2023.
  • Don't trust your eyes: on the (un)reliability of feature visualizations (International Conference on Machine Learning (ICML), 2023). Robert Geirhos, Roland S. Zimmermann, Blair Bilodeau, Wieland Brendel, Been Kim. 07 Jun 2023.
  • Adversarial attacks and defenses in explainable artificial intelligence: A survey (Information Fusion (Inf. Fusion), 2023). Hubert Baniecki, P. Biecek. 06 Jun 2023.
  • Leveraging Imperfect Sources to Detect Fairwashing in Black-Box Auditing. Jade Garcia Bourrée, Erwan Le Merrer, Gilles Tredan, Benoit Rottembourg. 23 May 2023.
  • Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation (Information Fusion (Inf. Fusion), 2023). Natalia Díaz Rodríguez, Javier Del Ser, Mark Coeckelbergh, Marcos López de Prado, E. Herrera-Viedma, Francisco Herrera. 02 May 2023.
  • Disagreement amongst counterfactual explanations: How transparency can be deceptive. Dieter Brughmans, Lissa Melis, David Martens. 25 Apr 2023.
  • Learning Optimal Fair Scoring Systems for Multi-Class Classification (IEEE International Conference on Tools with Artificial Intelligence (ICTAI), 2022). Julien Rouzot, Julien Ferry, Marie-José Huguet. 11 Apr 2023.
  • Why is plausibility surprisingly problematic as an XAI criterion? Weina Jin, Xiaoxiao Li, Ghassan Hamarneh. 30 Mar 2023.
  • Learning Hybrid Interpretable Models: Theory, Taxonomy, and Methods. Julien Ferry, Gabriel Laberge, Ulrich Aïvodji. 08 Mar 2023.
  • Five policy uses of algorithmic transparency and explainability. Matthew R. O’Shaughnessy. 06 Feb 2023.
  • Explainable AI does not provide the explanations end-users are asking for. Savio Rozario, G. Cevora. 25 Jan 2023.
  • Tensions Between the Proxies of Human Values in AI. Teresa Datta, D. Nissani, Max Cembalest, Akash Khanna, Haley Massa, John P. Dickerson. 14 Dec 2022.
  • Explanations, Fairness, and Appropriate Reliance in Human-AI Decision-Making (International Conference on Human Factors in Computing Systems (CHI), 2022). Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl. 23 Sep 2022.
  • SoK: Explainable Machine Learning for Computer Security Applications (European Symposium on Security and Privacy (Euro S&P), 2022). A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer. 22 Aug 2022.
  • OpenXAI: Towards a Transparent Evaluation of Model Explanations. Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju. 22 Jun 2022.
  • Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models. Esma Balkir, S. Kiritchenko, I. Nejadgholi, Kathleen C. Fraser. 08 Jun 2022.
  • Fool SHAP with Stealthily Biased Sampling. Gabriel Laberge, Ulrich Aïvodji, Satoshi Hara, M. Marchand, Foutse Khomh. 30 May 2022.
  • Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations (AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2022). Jessica Dai, Sohini Upadhyay, Ulrich Aïvodji, Stephen H. Bach, Himabindu Lakkaraju. 15 May 2022.
  • The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations (Conference on Fairness, Accountability and Transparency (FAccT), 2022). Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi. 06 May 2022.
  • A Sandbox Tool to Bias(Stress)-Test Fairness Algorithms. Nil-Jana Akpinar, Manish Nagireddy, Logan Stapleton, H. Cheng, Haiyi Zhu, Steven Wu, Hoda Heidari. 21 Apr 2022.
  • Backdooring Explainable Machine Learning. Maximilian Noppel, Lukas Peter, Christian Wressnegger. 20 Apr 2022.
  • Robustness and Usefulness in AI Explanation Methods. Erick Galinkin. 07 Mar 2022.
  • Margin-distancing for safe model explanation (International Conference on Artificial Intelligence and Statistics (AISTATS), 2022). Tom Yan, Chicheng Zhang. 23 Feb 2022.
  • Algorithmic audits of algorithms, and the law (AI and Ethics (AE), 2022). Erwan Le Merrer, Ronan Pons, Gilles Trédan. 15 Feb 2022.

Page 1 of 2