ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.
Model-Agnostic Interpretability of Machine Learning
arXiv: 1606.05386 · 16 June 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Tags: FAtt, FaML

Papers citing "Model-Agnostic Interpretability of Machine Learning" (50 of 88 shown)

  • neuralGAM: An R Package for Fitting Generalized Additive Neural Networks (13 May 2025) · Ines Ortega-Fernandez, Marta Sestelo
  • Integrating Explainable AI in Medical Devices: Technical, Clinical and Regulatory Insights and Recommendations (10 May 2025) · Dima Alattal, Asal Khoshravan Azar, P. Myles, Richard Branson, Hatim Abdulhussein, Allan Tucker
  • Retrieval Augmented Generation Evaluation for Health Documents (07 May 2025) · Mario Ceresa, Lorenzo Bertolini, Valentin Comte, Nicholas Spadaro, Barbara Raffael, ..., Sergio Consoli, Amalia Muñoz Piñeiro, Alex Patak, Maddalena Querci, Tobias Wiesenthal · Tags: RALM, 3DV
  • Diffusion Attribution Score: Evaluating Training Data Influence in Diffusion Models (24 Oct 2024) · Jinxu Lin, Linwei Tao, Minjing Dong, Chang Xu · Tags: TDI
  • Time Can Invalidate Algorithmic Recourse (10 Oct 2024) · Giovanni De Toni, Stefano Teso, Bruno Lepri, Andrea Passerini
  • Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond (09 Oct 2024) · Shanshan Han
  • Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction (30 Aug 2024) · Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita · Tags: XAI, AI4TS
  • A prototype-based model for set classification (25 Aug 2024) · Mohammad Mohammadi, Sreejita Ghosh · Tags: VLM
  • CHILLI: A data context-aware perturbation method for XAI (10 Jul 2024) · Saif Anwar, Nathan Griffiths, A. Bhalerao, T. Popham
  • Evaluating Human Alignment and Model Faithfulness of LLM Rationale (28 Jun 2024) · Mohsen Fayyaz, Fan Yin, Jiao Sun, Nanyun Peng
  • CONFINE: Conformal Prediction for Interpretable Neural Networks (01 Jun 2024) · Linhui Huang, S. Lala, N. Jha
  • Explaining Predictions by Characteristic Rules (31 May 2024) · Amr Alkhatib, Henrik Bostrom, Michalis Vazirgiannis
  • Model Interpretation and Explainability: Towards Creating Transparency in Prediction Models (31 May 2024) · D. Kridel, Jacob Dineen, Daniel R. Dolk, David G. Castillo
  • T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients (25 Apr 2024) · Evandro S. Ortigossa, Fábio F. Dias, Brian Barr, Claudio T. Silva, L. G. Nonato · Tags: FAtt
  • Segmentation, Classification and Interpretation of Breast Cancer Medical Images using Human-in-the-Loop Machine Learning (29 Mar 2024) · David Vázquez-Lema, E. Mosqueira-Rey, Elena Hernández-Pereira, Carlos Fernández-Lozano, Fernando Seara-Romera, Jorge Pombo-Otero · Tags: LM&MA
  • Explainable Learning with Gaussian Processes (11 Mar 2024) · Kurt Butler, Guanchao Feng, P. Djuric
  • Succinct Interaction-Aware Explanations (08 Feb 2024) · Sascha Xu, Joscha Cuppers, Jilles Vreeken · Tags: FAtt
  • Improving the accuracy of freight mode choice models: A case study using the 2017 CFS PUF data set and ensemble learning techniques (01 Feb 2024) · Diyi Liu, Hyeonsup Lim, M. Uddin, Yuandong Liu, Lee D. Han, Ho-Ling Hwang, Shih-Miao Chin
  • Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity Analysis Methods for Time-Series Deep Learning Models (29 Jan 2024) · Zhengguang Wang
  • Is K-fold cross validation the best model selection method for Machine Learning? (29 Jan 2024) · Juan M Gorriz, F. Segovia, J. Ramírez, A. Ortiz, J. Suckling
  • Black-Box Access is Insufficient for Rigorous AI Audits (25 Jan 2024) · Stephen Casper, Carson Ezell, Charlotte Siegmann, Noam Kolt, Taylor Lynn Curtis, ..., Michael Gerovitch, David Bau, Max Tegmark, David M. Krueger, Dylan Hadfield-Menell · Tags: AAML
  • Real-time Neural Network Inference on Extremely Weak Devices: Agile Offloading with Explainable AI (21 Dec 2023) · Kai Huang, Wei Gao
  • Toward enriched Cognitive Learning with XAI (19 Dec 2023) · M. Nizami, Ulrike Kuhl, J. Alonso-Moral, Alessandro Bogliolo
  • Towards Interpretable Classification of Leukocytes based on Deep Learning (24 Nov 2023) · S. Röhrl, Johannes Groll, M. Lengl, Simon Schumann, C. Klenk, D. Heim, Martin Knopp, Oliver Hayden, Klaus Diepold
  • Intriguing Properties of Data Attribution on Diffusion Models (01 Nov 2023) · Xiaosen Zheng, Tianyu Pang, Chao Du, Jing Jiang, Min-Bin Lin · Tags: TDI
  • XAI-CLASS: Explanation-Enhanced Text Classification with Extremely Weak Supervision (31 Oct 2023) · Daniel Hajialigol, Hanwen Liu, Xuan Wang · Tags: VLM
  • Text2Topic: Multi-Label Text Classification System for Efficient Topic Detection in User Generated Content with Zero-Shot Capabilities (23 Oct 2023) · Fengjun Wang, Moran Beladev, Ofri Kleinfeld, Elina Frayerman, Tal Shachar, Eran Fainman, Karen Lastmann Assaraf, Sarai Mizrachi, Benjamin Wang · Tags: VLM
  • Making informed decisions in cutting tool maintenance in milling: A KNN-based model agnostic approach (23 Oct 2023) · Aditya M. Rahalkar, Om M. Khare, A. Patange, Abhishek D. Patange, Rohan N. Soman
  • Explainable Depression Symptom Detection in Social Media (20 Oct 2023) · Eliseo Bao Souto, Anxo Perez, Javier Parapar
  • Natural Example-Based Explainability: a Survey (05 Sep 2023) · Antonin Poché, Lucas Hervier, M. Bakkay · Tags: XAI
  • TRIVEA: Transparent Ranking Interpretation using Visual Explanation of Black-Box Algorithmic Rankers (28 Aug 2023) · Jun Yuan, Kaustav Bhattacharjee, A. Islam, Aritra Dasgupta
  • Software Doping Analysis for Human Oversight (11 Aug 2023) · Sebastian Biewer, Kevin Baum, Sarah Sterz, Holger Hermanns, Sven Hetmank, Markus Langer, Anne Lauber-Rönsberg, Franz Lehr
  • Designing Explainable Predictive Machine Learning Artifacts: Methodology and Practical Demonstration (20 Jun 2023) · Giacomo Welsch, Peter Kowalczyk
  • Explaining black box text modules in natural language with language models (17 May 2023) · Chandan Singh, Aliyah R. Hsu, Richard Antonello, Shailee Jain, Alexander G. Huth, Bin-Xia Yu, Jianfeng Gao · Tags: MILM
  • Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK (20 Apr 2023) · L. Nannini, Agathe Balayn, A. Smith
  • Multi-resolution Interpretation and Diagnostics Tool for Natural Language Classifiers (06 Mar 2023) · P. Jalali, Nengfeng Zhou, Yufei Yu · Tags: AAML
  • Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations (04 Feb 2023) · Max Schemmer, Niklas Kühl, Carina Benz, Andrea Bartos, G. Satzger
  • Case-Base Neural Networks: survival analysis with time-varying, higher-order interactions (16 Jan 2023) · Jesse Islam, M. Turgeon, R. Sladek, S. Bhatnagar · Tags: CML
  • The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations (22 Dec 2022) · Angelos Chatzimparmpas, R. Martins, I. Jusufi, K. Kucher, Fabrice Rossi, A. Kerren · Tags: FAtt
  • (Psycho-)Linguistic Features Meet Transformer Models for Improved Explainable and Controllable Text Simplification (19 Dec 2022) · Yu Qiao, Xiaofei Li, Daniel Wiechmann, E. Kerz
  • "Explain it in the Same Way!" -- Model-Agnostic Group Fairness of Counterfactual Explanations (27 Nov 2022) · André Artelt, Barbara Hammer · Tags: FaML
  • A Detailed Study of Interpretability of Deep Neural Network based Top Taggers (09 Oct 2022) · Ayush Khot, Mark S. Neubauer, Avik Roy · Tags: AAML
  • Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification (26 Sep 2022) · Adrien Bennetot, Gianni Franchi, Javier Del Ser, Raja Chatila, Natalia Díaz Rodríguez · Tags: AAML
  • Making the black-box brighter: interpreting machine learning algorithm for forecasting drilling accidents (06 Sep 2022) · E. Gurina, Nikita Klyuchnikov, Ksenia Antipova, D. Koroteev · Tags: FAtt
  • Data Science and Machine Learning in Education (19 Jul 2022) · G. Benelli, Thomas Y. Chen, Javier Mauricio Duarte, Matthew Feickert, Matthew Graham, ..., K. Terao, S. Thais, A. Roy, J. Vlimant, G. Chachamis · Tags: AI4CE
  • How Platform-User Power Relations Shape Algorithmic Accountability: A Case Study of Instant Loan Platforms and Financially Stressed Users in India (11 May 2022) · Divya Ramesh, Vaishnav Kameswaran, Ding-wen Wang, Nithya Sambasivan
  • The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations (06 May 2022) · Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi
  • Should I Follow AI-based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making (14 Apr 2022) · Max Schemmer, Patrick Hemmer, Niklas Kühl, Carina Benz, G. Satzger
  • EEG based Emotion Recognition: A Tutorial and Review (16 Mar 2022) · Xiang Li, Yazhou Zhang, Prayag Tiwari, D. Song, Bin Hu, Meihong Yang, Zhigang Zhao, Neeraj Kumar, Pekka Marttinen
  • Counterfactual Explanations for Predictive Business Process Monitoring (24 Feb 2022) · Tsung-Hao Huang, Andreas Metzger, Klaus Pohl