Interpretability via Model Extraction (arXiv:1706.09773)

29 June 2017
Osbert Bastani, Carolyn Kim, Hamsa Bastani
Topics: FAtt

Papers citing "Interpretability via Model Extraction"

18 of 68 citing papers shown.
MARLeME: A Multi-Agent Reinforcement Learning Model Extraction Library
IEEE International Joint Conference on Neural Networks (IJCNN), 2020
Dmitry Kazhdan, Z. Shams, Pietro Lio
16 Apr 2020
Born-Again Tree Ensembles
International Conference on Machine Learning (ICML), 2020
Thibaut Vidal, Toni Pacheco, Maximilian Schiffer
24 Mar 2020
Interpretability of Blackbox Machine Learning Models through Dataview Extraction and Shadow Model Creation
Rupam Patir, Shubham Singhal, C. Anantaram, Vikram Goyal
02 Feb 2020
On Interpretability of Artificial Neural Networks: A Survey
IEEE Transactions on Radiation and Plasma Medical Sciences (TRPMS), 2020
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang
Topics: AAML, AI4CE
08 Jan 2020
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
International Conference on Human Factors in Computing Systems (CHI), 2020
Q. V. Liao, D. Gruen, Sarah Miller
08 Jan 2020
"How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2019
Himabindu Lakkaraju, Osbert Bastani
15 Nov 2019
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Information Fusion (Inf. Fusion), 2019
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
Topics: XAI
22 Oct 2019
Testing and Verification of Neural-Network-Based Safety-Critical Control Software: A Systematic Literature Review
Information and Software Technology (IST), 2019
Jin Zhang, Jingyue Li
05 Oct 2019
Human-grounded Evaluations of Explanation Methods for Text Classification
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019
Piyawat Lertvittayakumjorn, Francesca Toni
Topics: FAtt
29 Aug 2019
A Framework for the Extraction of Deep Neural Networks by Leveraging Public Data
Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, S. Shevade, V. Ganapathy
Topics: FedML, MLAU, MIACV
22 May 2019
Interpretable Deep Learning under Fire
Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang
Topics: AAML, AI4CE
03 Dec 2018
Explaining Explanations in AI
Brent Mittelstadt, Chris Russell, Sandra Wachter
Topics: XAI
04 Nov 2018
A Gradient-Based Split Criterion for Highly Accurate and Transparent Model Trees
International Joint Conference on Artificial Intelligence (IJCAI), 2018
Klaus Broelemann, Gjergji Kasneci
25 Sep 2018
Techniques for Interpretable Machine Learning
Mengnan Du, Ninghao Liu, Helen Zhou
Topics: FaML
31 Jul 2018
Verifiable Reinforcement Learning via Policy Extraction
Osbert Bastani, Yewen Pu, Armando Solar-Lezama
Topics: OffRL
22 May 2018
Interpreting Deep Classifier by Visual Distillation of Dark Knowledge
Kai Xu, Dae Hoon Park, Chang Yi, Charles Sutton
Topics: HAI, FAtt
11 Mar 2018
Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections
Xin Zhang, Armando Solar-Lezama, Rishabh Singh
Topics: FAtt
21 Feb 2018
Interpreting Tree Ensembles with inTrees
International Journal of Data Science and Analytics (JDSA), 2014
Houtao Deng
23 Aug 2014