ResearchTrend.AI

Techniques for Interpretable Machine Learning
arXiv:1808.00033v3, 31 July 2018
Mengnan Du, Ninghao Liu, Helen Zhou
FaML

Papers citing "Techniques for Interpretable Machine Learning"

50 of 310 citing papers shown
A Formal Language Approach to Explaining RNNs
Bishwamittra Ghosh, Daniel Neider
12 Jun 2020

Adversarial Infidelity Learning for Model Interpretation
Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Haiwei Yang
AAML
09 Jun 2020

A Semiparametric Approach to Interpretable Machine Learning
Numair Sani, Jaron J. R. Lee, Razieh Nabi, I. Shpitser
08 Jun 2020

Location, location, location: Satellite image-based real-estate appraisal
Jan-Peter Kucklick, Oliver Müller
04 Jun 2020

MFPP: Morphological Fragmental Perturbation Pyramid for Black-Box Model Explanations
International Conference on Pattern Recognition (ICPR), 2020
Qing Yang, Xia Zhu, Jong-Kae Fwu, Yun Ye, Ganmei You, Yuan Zhu
AAML
04 Jun 2020

Local Interpretability of Calibrated Prediction Models: A Case of Type 2 Diabetes Mellitus Screening Test
Simon Kocbek, Primož Kocbek, Leona Cilar, Gregor Stiglic
02 Jun 2020

A Performance-Explainability Framework to Benchmark Machine Learning Methods: Application to Multivariate Time Series Classifiers
Kevin Fauvel, Véronique Masson, Elisa Fromont
AI4TS
29 May 2020
Joint learning of interpretation and distillation
Jinchao Huang, Guofu Li, Zhicong Yan, Fucai Luo, Shenghong Li
FedML FAtt
24 May 2020

iCapsNets: Towards Interpretable Capsule Networks for Text Classification
Zhengyang Wang, Helen Zhou, Shuiwang Ji
16 May 2020

Explainable Reinforcement Learning: A Survey
Erika Puiutta, Eric M. S. P. Veith
XAI
13 May 2020

Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles
Mário Popolin Neto, F. Paulovich
FAtt
08 May 2020

XEM: An Explainable-by-Design Ensemble Method for Multivariate Time Series Classification
Kevin Fauvel, Elisa Fromont, Véronique Masson, P. Faverdin, Alexandre Termier
AI4TS
07 May 2020

Interpretable Learning-to-Rank with Generalized Additive Models
Honglei Zhuang, Xuanhui Wang, Michael Bendersky, Alexander Grushetsky, Yonghui Wu, Petr Mitrichev, Ethan Sterling, Nathan Bell, Walker Ravina, Hai Qian
AI4CE FAtt
06 May 2020

Heuristic-Based Weak Learning for Automated Decision-Making
Ryan Steed, Benjamin Williams
HAI
05 May 2020
Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition
Mahsan Nourani, Chiradeep Roy, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate
AAML
05 May 2020

A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds
Neural Networks (NN), 2020
M. Kovalev, Lev V. Utkin
AAML
05 May 2020

SurvLIME-Inf: A simplified modification of SurvLIME for explanation of machine learning survival models
Lev V. Utkin, M. Kovalev, E. Kasimov
05 May 2020

Post-hoc explanation of black-box classifiers using confident itemsets
Expert Systems with Applications (ESWA), 2020
M. Moradi, Matthias Samwald
05 May 2020

DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification
Annual Meeting of the Association for Computational Linguistics (ACL), 2020
Lianwei Wu, Y. Rao, Yongqiang Zhao, Hao Liang, Ambreen Nazir
28 Apr 2020

Sequential Interpretability: Methods, Applications, and Future Direction for Understanding Deep Learning Models in the Context of Sequential Data
B. Shickel, Parisa Rashidi
AI4TS
27 Apr 2020

Rigorous Explanation of Inference on Probabilistic Graphical Models
Yifei Liu, Chao Chen, Xi Zhang, Sihong Xie
TPM FAtt
21 Apr 2020

Manipulation-Proof Machine Learning
Daniel Björkegren, J. Blumenstock, Samsun Knight
08 Apr 2020
SurvLIME: A method for explaining machine learning survival models
Knowledge-Based Systems (KBS), 2020
M. Kovalev, Lev V. Utkin, E. Kasimov
18 Mar 2020

GAMI-Net: An Explainable Neural Network based on Generalized Additive Models with Structured Interactions
Pattern Recognition, 2020
Zebin Yang, Aijun Zhang, Agus Sudjianto
FAtt
16 Mar 2020

Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation
SIGKDD Explorations, 2020
Raha Moraffah, Mansooreh Karami, Ruocheng Guo, A. Raglin, Huan Liu
CML ELM XAI
09 Mar 2020

Interpretability of machine learning based prediction models in healthcare
Gregor Stiglic, Primož Kocbek, Nino Fijačko, Marinka Zitnik, K. Verbert, Leona Cilar
AI4CE
20 Feb 2020

Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy
International Journal of Human-Computer Interaction (IJHCI), 2020
B. Shneiderman
10 Feb 2020

Making Logic Learnable With Neural Networks
Tobias Brudermueller, Dennis L. Shung, A. Stanley, Johannes Stegmaier, Smita Krishnaswamy
NAI
10 Feb 2020

Explaining with Counter Visual Attributes and Examples
International Conference on Multimedia Retrieval (ICMR), 2020
Sadaf Gulshad, A. Smeulders
XAI FAtt AAML
27 Jan 2020
AI-Powered GUI Attack and Its Defensive Methods
ACM Southeast Regional Conference (ACMSE), 2020
Ning Yu, Zachary Tuttle, C. J. Thurnau, Emmanuel Mireku
AAML
26 Jan 2020

On Interpretability of Artificial Neural Networks: A Survey
IEEE Transactions on Radiation and Plasma Medical Sciences (TRPMS), 2020
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
AAML AI4CE
08 Jan 2020

Attributional Robustness Training using Input-Gradient Spatial Alignment
M. Singh, Nupur Kumari, Puneet Mangla, Abhishek Sinha, V. Balasubramanian, Balaji Krishnamurthy
OOD
29 Nov 2019

LionForests: Local Interpretation of Random Forests
Ioannis Mollas, Nick Bassiliades, I. Vlahavas, Grigorios Tsoumakas
20 Nov 2019

A Decision-Theoretic Approach for Model Interpretability in Bayesian Framework
Homayun Afrabandpey, Tomi Peltola, Juho Piironen, Aki Vehtari, Samuel Kaski
21 Oct 2019

Understanding Misclassifications by Attributes
Sadaf Gulshad, Zeynep Akata, J. H. Metzen, A. Smeulders
AAML
15 Oct 2019

Interpreting Deep Learning-Based Networking Systems
Zili Meng, Minhu Wang, Jia-Ju Bai, Mingwei Xu, Hongzi Mao, Hongxin Hu
AI4CE
09 Oct 2019
Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks
Mehdi Neshat, Zifan Wang, Bradley Alexander, Fan Yang, Zijian Zhang, Sirui Ding, Markus Wagner, Helen Zhou
FAtt
03 Oct 2019

Towards Generalizable Deepfake Detection with Locality-aware AutoEncoder
Mengnan Du, Shiva K. Pentyala, Yuening Li, Helen Zhou
13 Sep 2019

FDA: Feature Disruptive Attack
IEEE International Conference on Computer Vision (ICCV), 2019
Aditya Ganeshan, S. VivekB., R. Venkatesh Babu
AAML
10 Sep 2019

Fairness in Deep Learning: A Computational Perspective
IEEE Intelligent Systems, 2019
Mengnan Du, Fan Yang, Na Zou, Helen Zhou
FaML FedML
23 Aug 2019

Learning Credible Deep Neural Networks with Rationale Regularization
Industrial Conference on Data Mining (IDM), 2019
Mengnan Du, Ninghao Liu, Fan Yang, Helen Zhou
FaML
13 Aug 2019

Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks
Computer Vision and Pattern Recognition (CVPR), 2019
Jörg Wagner, Jan M. Köhler, Tobias Gindele, Leon Hetzel, Thaddäus Wiedemer, Sven Behnke
AAML FAtt
07 Aug 2019

Techniques for Automated Machine Learning
SIGKDD Explorations, 2019
Yi-Wei Chen, Qingquan Song, Helen Zhou
21 Jul 2019
Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Fan Yang, Mengnan Du, Helen Zhou
XAI ELM
16 Jul 2019

The Mass, Fake News, and Cognition Security
Bin Guo, Yasan Ding, Yueheng Sun, Shuai Ma, Ke Li
09 Jul 2019

Interpretable Feature Learning in Multivariate Big Data Analysis for Network Monitoring
IEEE Transactions on Network and Service Management (TNSM), 2019
J. Camacho, K. Wasielewska, R. Bro, D. Kotz
05 Jul 2019

Exact and Consistent Interpretation of Piecewise Linear Models Hidden behind APIs: A Closed Form Solution
IEEE International Conference on Data Engineering (ICDE), 2019
Zicun Cong, Lingyang Chu, Lanjun Wang, X. Hu, Jian Pei
17 Jun 2019

Deep Learning for Spatio-Temporal Data Mining: A Survey
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2019
Senzhang Wang, Jiannong Cao, Philip S. Yu
AI4TS
11 Jun 2019

Is a Single Vector Enough? Exploring Node Polysemy for Network Embedding
Knowledge Discovery and Data Mining (KDD), 2019
Ninghao Liu, Qiaoyu Tan, Yuening Li, Hongxia Yang, Jingren Zhou, Helen Zhou
25 May 2019

Consensus-based Interpretable Deep Neural Networks with Application to Mortality Prediction
Shaeke Salman, S. N. Payrovnaziri, Xiuwen Liu, Pablo Rengifo-Moreno, Zhe He
14 May 2019
Page 6 of 7