ResearchTrend.AI

arXiv:2201.10295
Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts

25 January 2022
Sebastian Bordt
Michèle Finck
Eric Raidl
Ulrike von Luxburg
AILaw

Papers citing "Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts"

13 papers

DiCE-Extended: A Robust Approach to Counterfactual Explanations in Machine Learning
  Volkan Bakir, Polat Goktas, Sureyya Akyuz
  26 Apr 2025

ExpProof: Operationalizing Explanations for Confidential Models with ZKPs
  Chhavi Yadav, Evan Monroe Laufer, Dan Boneh, Kamalika Chaudhuri
  06 Feb 2025

Explainable AI needs formal notions of explanation correctness
  Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki
  XAI · 22 Sep 2024

Why You Should Not Trust Interpretations in Machine Learning: Adversarial Attacks on Partial Dependence Plots
  Xi Xin, Giles Hooker, Fei Huang
  AAML · 29 Apr 2024

The Case Against Explainability
  Hofit Wasserman Rozen, N. Elkin-Koren, Ran Gilad-Bachrach
  AILaw · ELM · 20 May 2023

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
  L. Nannini, Agathe Balayn, A. Smith
  20 Apr 2023

Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication
  B. Keenan, Kacper Sokol
  07 Feb 2023

COmic: Convolutional Kernel Networks for Interpretable End-to-End Learning on (Multi-)Omics Data
  Jonas C. Ditz, Bernhard Reuter, Nícolas Pfeifer
  02 Dec 2022

Attribution-based Explanations that Provide Recourse Cannot be Robust
  H. Fokkema, R. D. Heide, T. Erven
  FAtt · 31 May 2022

Unfooling Perturbation-Based Post Hoc Explainers
  Zachariah Carmichael, Walter J. Scheirer
  AAML · 29 May 2022

The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
  Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
  03 Feb 2022

Convolutional Motif Kernel Networks
  Jonas C. Ditz, Bernhard Reuter, N. Pfeifer
  FAtt · 03 Nov 2021

What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research
  Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum
  XAI · 15 Feb 2021