Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance

21 July 2020
Mattia Carletti, M. Terzi, Gian Antonio Susto
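The paper introduces DIFFI, a feature-importance method built on top of the Isolation Forest anomaly detector. As background only, the following is a minimal sketch of fitting and scoring the underlying detector with scikit-learn's standard IsolationForest; it is an assumed illustration of the base model, not the DIFFI importance computation described in the paper, and the synthetic data and parameter choices are placeholders.

    # Background sketch: plain Isolation Forest scoring (assumption: scikit-learn, not DIFFI itself)
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 4))                      # mostly "normal" points
    X_test = np.vstack([rng.normal(size=(20, 4)),            # inliers
                        rng.normal(loc=6.0, size=(5, 4))])   # injected outliers

    forest = IsolationForest(n_estimators=100, random_state=0).fit(X_train)
    scores = forest.decision_function(X_test)  # higher score = more normal
    labels = forest.predict(X_test)            # +1 = inlier, -1 = outlier
    print(labels)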

Papers citing "Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance"

11 / 11 papers shown

Extending Decision Predicate Graphs for Comprehensive Explanation of Isolation Forest
Matteo Ceschin, Leonardo Arrighi, Luca Longo, Sylvio Barbon Junior
06 May 2025

DTOR: Decision Tree Outlier Regressor to explain anomalies
Riccardo Crupi, D. Regoli, Alessandro Sabatino, Immacolata Marano, Massimiliano Brinis, Luca Albertazzi, Andrea Cirillo, A. Cosentini
16 Mar 2024

AcME-AD: Accelerated Model Explanations for Anomaly Detection
Valentina Zaccaria, David Dandolo, Chiara Masiero, Gian Antonio Susto
02 Mar 2024

Explainable Predictive Maintenance: A Survey of Current Methods, Challenges and Opportunities
Logan Cummins, Alexander Sommers, Somayeh Bakhtiari Ramezani, Sudip Mittal, Joseph E. Jabour, Maria Seale, Shahram Rahimi
15 Jan 2024

Anomaly component analysis
Romain Valla, Pavlo Mozharovskyi, Florence d'Alché-Buc
26 Dec 2023

Transparent Anomaly Detection via Concept-based Explanations
Laya Rafiee Sevyeri, Ivaxi Sheth, Farhood Farahnak, Samira Ebrahimi Kahou, S. Enger
16 Oct 2023

A Survey on Explainable Artificial Intelligence for Cybersecurity
Gaith Rjoub, Jamal Bentahar, Omar Abdel Wahab, R. Mizouni, Alyssa Song, Robin Cohen, Hadi Otrok, Azzam Mourad
07 Mar 2023

AcME -- Accelerated Model-agnostic Explanations: Fast Whitening of the Machine-Learning Black Box
David Dandolo, Chiara Masiero, Mattia Carletti, Davide Dalle Pezze, Gian Antonio Susto
FAtt, LRM
23 Dec 2021

Why Are You Weird? Infusing Interpretability in Isolation Forest for Anomaly Detection
Nirmal Sobha Kartha, Clément Gautrais, Vincent Vercruyssen
13 Dec 2021

An Explainable Artificial Intelligence Approach for Unsupervised Fault Detection and Diagnosis in Rotating Machinery
L. Brito, Gian Antonio Susto, J. N. Brito, M. Duarte
23 Feb 2021

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML
28 Feb 2017