ResearchTrend.AI
Cited By: arXiv 1903.02407

Explaining Anomalies Detected by Autoencoders Using SHAP

6 March 2019
Liat Antwarg, Ronnie Mindlin Miller, Bracha Shapira, Lior Rokach
FAtt, TDI
ArXiv (abs) · PDF · HTML

Papers citing "Explaining Anomalies Detected by Autoencoders Using SHAP"

32 / 32 papers shown
An Unsupervised Deep XAI Framework for Localization of Concurrent Replay Attacks in Nuclear Reactor Signals
Konstantinos Vasili, Zachery T. Dahm, William Richards. AAML. 05 Aug 2025.

Can I trust my anomaly detection system? A case study based on explainable AI
Muhammad Rashid, E. Amparore, Enrico Ferrari, Damiano Verda. 29 Jul 2024.

Approximating the Core via Iterative Coalition Sampling
I. Gemp, Marc Lanctot, Luke Marris, Yiran Mao, Edgar A. Duénez-Guzmán, ..., Michael Kaisers, Daniel Hennes, Kalesha Bullard, Kate Larson, Yoram Bachrach. 06 Feb 2024.

Analyzing Key Users' behavior trends in Volunteer-Based Networks
Nofar Piterman, Tamar Makov, Michael Fire. 04 Oct 2023.

Using Kernel SHAP XAI Method to optimize the Network Anomaly Detection Model
Khushnaseeb Roshan, Aasim Zafar. 31 Jul 2023.

Detection of Sensor-To-Sensor Variations using Explainable AI
Sarah Seifi, Sebastian A. Schober, Cecilia Carbonelli, Lorenzo Servadei, Robert Wille. 19 Jun 2023.

Unlocking Layer-wise Relevance Propagation for Autoencoders
Kenyu Kobayashi, Renata Khasanova, Arno Schneuwly, Felix Schmidt, Matteo Casserini. FAtt. 21 Mar 2023.

Interpretable Ensembles of Hyper-Rectangles as Base Models
A. Konstantinov, Lev V. Utkin. 15 Mar 2023.

Explainable Artificial Intelligence and Cybersecurity: A Systematic Literature Review
C. Mendes, T. N. Rios. 27 Feb 2023.

A Survey on Explainable Anomaly Detection
Zhong Li, Yuxuan Zhu, M. Leeuwen. 13 Oct 2022.

Fine-grained Anomaly Detection in Sequential Data via Counterfactual Explanations
He Cheng, Depeng Xu, Shuhan Yuan, Xintao Wu. AI4TS. 09 Oct 2022.

Explaining Anomalies using Denoising Autoencoders for Financial Tabular Data
Timur Sattarov, Dayananda Herurkar, Jörn Hees. 21 Sep 2022.

RESHAPE: Explaining Accounting Anomalies in Financial Statement Audits by enhancing SHapley Additive exPlanations
Ricardo Müller, Marco Schreyer, Timur Sattarov, Damian Borth. AAML, MLAU. 19 Sep 2022.

Explanation Method for Anomaly Detection on Mixed Numerical and Categorical Spaces
Iñigo López-Riobóo Botana, Carlos Eiras-Franco, Julio César Hernández Castro, Amparo Alonso-Betanzos. 09 Sep 2022.

A general-purpose method for applying Explainable AI for Anomaly Detection
John Sipple, Abdou Youssef. 23 Jul 2022.

Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities
Subash Neupane, Jesse Ables, William Anderson, Sudip Mittal, Shahram Rahimi, I. Banicescu, Maria Seale. AAML. 13 Jul 2022.

Towards Responsible AI for Financial Transactions
Charl Maree, Jan Erik Modal, C. Omlin. AAML. 06 Jun 2022.

PIXAL: Anomaly Reasoning with Visual Analytics
Brian Montambault, C. Brumar, M. Behrisch, Remco Chang. 23 May 2022.

Trustworthy Anomaly Detection: A Survey
Shuhan Yuan, Xintao Wu. FaML. 15 Feb 2022.

Utilizing XAI technique to improve autoencoder based model for computer network anomaly detection with shapley additive explanation(SHAP)
Khushnaseeb Roshan, Aasim Zafar. AAML. 14 Dec 2021.

Coalitional Bayesian Autoencoders -- Towards explainable unsupervised deep learning
Bang Xiang Yong, Alexandra Brintrup. 19 Oct 2021.

DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications
Dongqi Han, Zhiliang Wang, Wenqi Chen, Ying Zhong, Su Wang, Han Zhang, Jiahai Yang, Xingang Shi, Xia Yin. AAML. 23 Sep 2021.

An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data
Lev V. Utkin, A. Konstantinov, Kirill Vishniakov. FAtt. 16 Jun 2021.

Explainable Machine Learning for Fraud Detection
I. Psychoula, A. Gutmann, Pradip Mainali, Sharon H. Lee, Paul Dunphy, F. Petitcolas. FaML. 13 May 2021.

Interpretation of multi-label classification models using shapley values
Shikun Chen. FAtt, TDI. 21 Apr 2021.

A new interpretable unsupervised anomaly detection method based on residual explanation
David F. N. Oliveira, L. Vismari, A. M. Nascimento, J. R. de Almeida, P. Cugnasca, J. Camargo, L. Almeida, Rafael Gripp, Marcelo M. Neves. AAML. 14 Mar 2021.

Ensembles of Random SHAPs
Lev V. Utkin, A. Konstantinov. FAtt. 04 Mar 2021.

Does the dataset meet your expectations? Explaining sample representation in image data
Dhasarathy Parthasarathy, Anton Johansson. 06 Dec 2020.

On the Nature and Types of Anomalies: A Review of Deviations in Data
Ralph Foorthuis. 30 Jul 2020.

Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
Arun Das, P. Rad. XAI. 16 Jun 2020.

Shapley Values of Reconstruction Errors of PCA for Explaining Anomaly Detection
Naoya Takeishi. FAtt. 08 Sep 2019.

VizADS-B: Analyzing Sequences of ADS-B Images Using Explainable Convolutional LSTM Encoder-Decoder to Detect Cyber Attacks
Sefi Akerman, Edan Habler, A. Shabtai. 19 Jun 2019.