Explanations can be manipulated and geometry is to blame
arXiv: 1906.07983 · 19 June 2019
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel
AAML, FAtt

Papers citing "Explanations can be manipulated and geometry is to blame"

50 of 77 citing papers shown
Robustness questions the interpretability of graph neural networks: what to do?
Kirill Lukyanov, Georgii Sazonov, Serafim Boyarsky, Ilya Makarov
AAML · 05 May 2025
DiCE-Extended: A Robust Approach to Counterfactual Explanations in Machine Learning
Volkan Bakir, Polat Goktas, Sureyya Akyuz
26 Apr 2025
Axiomatic Explainer Globalness via Optimal Transport
Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy
13 Mar 2025
ExplainReduce: Summarising local explanations via proxies
Lauri Seppäläinen, Mudong Guo, Kai Puolamäki
FAtt · 17 Feb 2025
Explaining the Impact of Training on Vision Models via Activation Clustering
Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair
29 Nov 2024
Explainable AI needs formal notions of explanation correctness
Stefan Haufe, Rick Wilming, Benedict Clark, Rustam Zhumagambetov, Danny Panknin, Ahcène Boubekki
XAI · 22 Sep 2024
Algebraic Adversarial Attacks on Integrated Gradients
Lachlan Simpson, Federico Costanza, Kyle Millar, A. Cheng, Cheng-Chew Lim, Hong-Gunn Chew
SILM, AAML · 23 Jul 2024
Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution
Eslam Zaher, Maciej Trzaskowski, Quan Nguyen, Fred Roosta
AAML · 16 May 2024
Stability of Explainable Recommendation
Sairamvinay Vijayaraghavan, Prasant Mohapatra
AAML · 03 May 2024
Robust Explainable Recommendation
Sairamvinay Vijayaraghavan, Prasant Mohapatra
AAML · 03 May 2024
Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova
02 May 2024
Why You Should Not Trust Interpretations in Machine Learning: Adversarial Attacks on Partial Dependence Plots
Xi Xin, Giles Hooker, Fei Huang
AAML · 29 Apr 2024
Approximate Nullspace Augmented Finetuning for Robust Vision Transformers
Haoyang Liu, Aditya Singh, Yijiang Li, Haohan Wang
AAML, ViT · 15 Mar 2024
Are Classification Robustness and Explanation Robustness Really Strongly Correlated? An Analysis Through Input Loss Landscape
Tiejin Chen, Wenwang Huang, Linsey Pang, Dongsheng Luo, Hua Wei
OOD · 09 Mar 2024
Respect the model: Fine-grained and Robust Explanation with Sharing Ratio Decomposition
Sangyu Han, Yearim Kim, Nojun Kwak
AAML · 25 Jan 2024
Improving Interpretation Faithfulness for Vision Transformers
Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
29 Nov 2023
Beyond XAI: Obstacles Towards Responsible AI
Yulu Pi
07 Sep 2023
Can We Rely on AI?
D. Higham
AAML · 29 Aug 2023
Robust Ranking Explanations
Chao Chen, Chenghua Guo, Guixiang Ma, Ming Zeng, Xi Zhang, Sihong Xie
FAtt, AAML · 08 Jul 2023
DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications
Adam Ivankay, Mattia Rigotti, P. Frossard
OOD, MedIm · 05 Jul 2023
Overcoming Adversarial Attacks for Human-in-the-Loop Applications
Ryan McCoppin, Marla Kennedy, P. Lukyanenko, Sean M. Kennedy
AAML · 09 Jun 2023
Preemptively Pruning Clever-Hans Strategies in Deep Neural Networks
Lorenz Linhardt, Klaus-Robert Müller, G. Montavon
AAML · 12 Apr 2023
MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope
Jingwei Zhang, Farzan Farnia
UQCV · 08 Jan 2023
Valid P-Value for Deep Learning-Driven Salient Region
Daiki Miwa, Vo Nguyen Le Duy, I. Takeuchi
FAtt, AAML · 06 Jan 2023
Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, G. Montavon
FAtt · 30 Dec 2022
Robust Explanation Constraints for Neural Networks
Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller
FAtt · 16 Dec 2022
Understanding and Enhancing Robustness of Concept-based Models
Sanchit Sinha, Mengdi Huai, Jianhui Sun, Aidong Zhang
AAML · 29 Nov 2022
Towards More Robust Interpretation via Local Gradient Alignment
Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon
FAtt · 29 Nov 2022
Attribution-based XAI Methods in Computer Vision: A Review
Kumar Abhishek, Deeksha Kamath
27 Nov 2022
SEAT: Stable and Explainable Attention
Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
OOD · 23 Nov 2022
What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
XAI, FAtt · 10 Nov 2022
On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
XAI, FAtt, AAML · 09 Nov 2022
BOREx: Bayesian-Optimization-Based Refinement of Saliency Map for Image- and Video-Classification Models
Atsushi Kikuchi, Kotaro Uchida, Masaki Waga, Kohei Suenaga
FAtt · 31 Oct 2022
Extremely Simple Activation Shaping for Out-of-Distribution Detection
Andrija Djurisic, Nebojsa Bozanic, Arjun Ashok, Rosanne Liu
OODD · 20 Sep 2022
EMaP: Explainable AI with Manifold-based Perturbations
Minh Nhat Vu, Huy Mai, My T. Thai
AAML · 18 Sep 2022
SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem, D. Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, S. Verwer
22 Aug 2022
Efficiently Training Low-Curvature Neural Networks
Suraj Srinivas, Kyle Matoba, Himabindu Lakkaraju, F. Fleuret
AAML · 14 Jun 2022
Fooling Explanations in Text Classifiers
Adam Ivankay, Ivan Girardi, Chiara Marchiori, P. Frossard
AAML · 07 Jun 2022
A Human-Centric Take on Model Monitoring
Murtuza N. Shergadwala, Himabindu Lakkaraju, K. Kenthapadi
06 Jun 2022
Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema, R. D. Heide, T. Erven
FAtt · 31 May 2022
Sparse Visual Counterfactual Explanations in Image Space
Valentyn Boreiko, Maximilian Augustin, Francesco Croce, Philipp Berens, Matthias Hein
BDL, CML · 16 May 2022
Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger
AAML · 20 Apr 2022
Anti-Adversarially Manipulated Attributions for Weakly Supervised Semantic Segmentation and Object Localization
Jungbeom Lee, Eunji Kim, J. Mok, Sung-Hoon Yoon
WSOL · 11 Apr 2022
Robustness and Usefulness in AI Explanation Methods
Erick Galinkin
FAtt · 07 Mar 2022
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
03 Feb 2022
Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt · 30 Jan 2022
Brain Structural Saliency Over The Ages
Daniel Taylor, Jonathan Shock, Deshendran Moodley, J. Ipser, M. Treder
FAtt · 12 Jan 2022
Toward Explainable AI for Regression Models
S. Letzgus, Patrick Wagner, Jonas Lederer, Wojciech Samek, Klaus-Robert Müller, G. Montavon
XAI · 21 Dec 2021
Defense Against Explanation Manipulation
Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu
AAML · 08 Nov 2021
A Survey on the Robustness of Feature Importance and Counterfactual Explanations
Saumitra Mishra, Sanghamitra Dutta, Jason Long, Daniele Magazzeni
AAML · 30 Oct 2021