The Mythos of Model Interpretability
Zachary Chase Lipton · 10 June 2016 · FaML
ArXiv (abs) · PDF · HTML

Papers citing "The Mythos of Model Interpretability"

50 / 1,204 papers shown
A Unified Framework for Evaluating and Enhancing the Transparency of Explainable AI Methods via Perturbation-Gradient Consensus Attribution
M. Islam, M. F. Mridha, Md Abrar Jahin, Nilanjan Dey · 10 Apr 2026
MASE: Interpretable NLP Models via Model-Agnostic Saliency Estimation
Zhou Yang, Shunyan Luo, Jiazhen Zhu, Fang Jin · MILM, FAtt · 04 Dec 2025
MOTIF-RF: Multi-template On-chip Transformer Synthesis Incorporating Frequency-domain Self-transfer Learning for RFIC Design Automation
Houbo He, Yizhou Xu, Lei Xia, Yaolong Hu, Fan Cai, Taiyun Chi · 26 Nov 2025
Foundations of Artificial Intelligence Frameworks: Notion and Limits of AGI
Khanh Gia Bui · NAI, AI4CE · 23 Nov 2025
Bridging Philosophy and Machine Learning: A Structuralist Framework for Classifying Neural Network Representations
Yildiz Culcu · AI4CE · 23 Nov 2025
Rethinking Saliency Maps: A Cognitive Human Aligned Taxonomy and Evaluation Framework for Explanations
Yehonatan Elisha, Seffi Cohen, Oren Barkan, Noam Koenigstein · FAtt · 17 Nov 2025
Judging by the Rules: Compliance-Aligned Framework for Modern Slavery Statement Monitoring
Wenhao Xu, Akshatha Arodi, Jian-Yun Nie, Arsène Fansi Tchango · AILaw · 11 Nov 2025
QiNN-QJ: A Quantum-inspired Neural Network with Quantum Jump for Multimodal Sentiment Analysis
Yiwei Chen, Kehuan Yan, Yu Pan, D. Dong · 31 Oct 2025
Machine learning approaches for interpretable antibody property prediction using structural data
Kevin Michalewicz, Mauricio Barahona, Barbara Bravi · 28 Oct 2025
Post-hoc Stochastic Concept Bottleneck Models
Wiktor Jan Hoffmann, Sonia Laguna, Moritz Vandenhirtz, Emanuele Palumbo, Julia E. Vogt · 09 Oct 2025
Cluster Paths: Navigating Interpretability in Neural Networks
Nicholas M. Kroeger, Vincent Bindschaedler · 08 Oct 2025
Semantic Regexes: Auto-Interpreting LLM Features with a Structured Language
Angie Boggust, Donghao Ren, Yannick Assogba, Dominik Moritz, Arvind Satyanarayan, Fred Hohman · 07 Oct 2025
Barbarians at the Gate: How AI is Upending Systems Research
Audrey Cheng, Zhifei Li, Melissa Z. Pan, Shu Liu, Bowen Wang, ..., Aditya Desai, Jiarong Xing, Koushik Sen, Matei A. Zaharia, Ion Stoica · 07 Oct 2025
(Sometimes) Less is More: Mitigating the Complexity of Rule-based Representation for Interpretable Classification
Luca Bergamin, Roberto Confalonieri, F. Aiolli · 26 Sep 2025
MindCraft: How Concept Trees Take Shape In Deep Models
Bowei Tian, Yexiao He, Wanghao Ye, Ziyao Wang, Meng Liu, Ang Li · LRM · 26 Sep 2025
LAVA: Explainability for Unsupervised Latent Embeddings
Ivan Stresec, Joana P. Gonçalves · 25 Sep 2025
Efficient & Correct Predictive Equivalence for Decision Trees
Joao Marques-Silva, Alexey Ignatiev · 22 Sep 2025
Towards a Transparent and Interpretable AI Model for Medical Image Classifications
Cognitive Neurodynamics (Cogn Neurodyn), 2025
Binbin Wen, Yihang Wu, Tareef Daqqaq, Ahmad Chaddad · 20 Sep 2025
Transparent and Fair Profiling in Employment Services: Evidence from Switzerland
Tim Räz · 15 Sep 2025
Clarifying Model Transparency: Interpretability versus Explainability in Deep Learning with MNIST and IMDB Examples
Mitali Raj · 13 Sep 2025
Interpretability as Alignment: Making Internal Understanding a Design Principle
Aadit Sengupta, Pratinav Seth, Vinay Kumar Sankarapu · AI4CE, AAML · 10 Sep 2025
Explainability of CNN Based Classification Models for Acoustic Signal
Zubair Faruqui, Mackenzie S. McIntire, Rahul Dubey, Jay McEntee · 10 Sep 2025
Breaking SafetyCore: Exploring the Risks of On-Device AI Deployment
Victor Guyomard, Mathis Mauvisseau, Marie Paindavoine · 08 Sep 2025
From Eigenmodes to Proofs: Integrating Graph Spectral Operators with Symbolic Interpretable Reasoning
Andrew Kiruluta, Priscilla Burity · 07 Sep 2025
Fuzzy, Symbolic, and Contextual: Enhancing LLM Instruction via Cognitive Scaffolding
Vanessa Figueiredo · AI4CE · 28 Aug 2025
Individualized and Interpretable Sleep Forecasting via a Two-Stage Adaptive Spatial-Temporal Model
Xueyi Wang, Elisabeth Wilhelm · AI4TS · 28 Aug 2025
Interestingness First Classifiers
Ryoma Sato · 27 Aug 2025
Goal-Directedness is in the Eye of the Beholder
Nina Rajcic, Anders Søgaard · 18 Aug 2025
How can we trust opaque systems? Criteria for robust explanations in XAI
Florian J. Boge, Annika Schuster · AAML · 18 Aug 2025
To Explain Or Not To Explain: An Empirical Investigation Of AI-Based Recommendations On Social Media Platforms
Electronic Markets (EM), 2024
Akm Bahalul Haque, A. Najmul Islam, Patrick Mikalef · 13 Aug 2025
Toward using explainable data-driven surrogate models for treating performance-based seismic design as an inverse engineering problem
Mohsen Zaker Esteghamati · AI4CE · 01 Aug 2025
Your Model Is Unfair, Are You Even Aware? Inverse Relationship Between Comprehension and Trust in Explainability Visualizations of Biased ML Models
Zhanna Kaufman, Madeline Endres, Cindy Xiong Bearfield, Yuriy Brun · 31 Jul 2025
SIFOTL: A Principled, Statistically-Informed Fidelity-Optimization Method for Tabular Learning
Shubham Mohole, Sainyam Galhotra · 23 Jul 2025
Bhatt Conjectures: On Necessary-But-Not-Sufficient Benchmark Tautology for Human Like Reasoning
Manish Bhatt · LRM · 13 Jun 2025
Evaluating Sparse Autoencoders: From Shallow Design to Matching Pursuit
Valérie Costa, Thomas Fel, Ekdeep Singh Lubana, Bahareh Tolooshams, Demba Ba · 05 Jun 2025
Towards Interpretability Without Sacrifice: Faithful Dense Layer Decomposition with Mixture of Decoders
James Oldfield, Shawn Im, Yixuan Li, M. Nicolaou, Ioannis Patras, Grigorios G. Chrysos · MoE · 27 May 2025
UGCE: User-Guided Incremental Counterfactual Exploration
Christos Fragkathoulas, E. Pitoura · 27 May 2025
Explanation User Interfaces: A Systematic Literature Review
Eleonora Cappuccio, Andrea Esposito, Francesco Greco, Giuseppe Desolda, Rosa Lanzilotti, Salvatore Rinzivillo · XAI · 26 May 2025
ExPLAIND: Unifying Model, Data, and Training Attribution to Study Model Behavior
Florian Eichin, Yupei Du, Philipp Mondorf, Maria Matveev, Barbara Plank, Michael A. Hedderich · FAtt · 26 May 2025
SplitWise Regression: Stepwise Modeling with Adaptive Dummy Encoding
Marcell T. Kurbucz, Nikolaos Tzivanakis, Nilufer Sari Aslam, Adam M. Sykulski · 21 May 2025
The Evolution of Alpha in Finance Harnessing Human Insight and LLM Agents
Mohammad Rubyet Islam · AIFin · 20 May 2025
BACON: A fully explainable AI model with graded logic for decision making problems
Haishi Bai, Jozo Dujmovic, Jianwu Wang · 20 May 2025
Explaining Neural Networks with Reasons
Levin Hornischer, Hannes Leitgeb · FAtt, AAML, MILM · 20 May 2025
Two out of Three (ToT): using self-consistency to make robust predictions
Jung Hoon Lee, Sujith Vijayan · OOD · 19 May 2025
Growable and Interpretable Neural Control with Online Continual Learning for Autonomous Lifelong Locomotion Learning Machines
The International Journal of Robotics Research (IJRR), 2025
Arthicha Srisuchinnawong, Poramate Manoonpong · CLL, LRM · 17 May 2025
Evaluating Model Explanations without Ground Truth
Conference on Fairness, Accountability and Transparency (FAccT), 2025
Kaivalya Rawal, Zihao Fu, Eoin Delaney, Chris Russell · FAtt, XAI · 15 May 2025
Interpretable Risk Mitigation in LLM Agent Systems
Jan Chojnacki · LLMAG · 15 May 2025
Towards Requirements Engineering for RAG Systems
Tor Sporsem, Rasmus Ulfsnes · 12 May 2025
Sparse Latent Factor Forecaster (SLFF) with Iterative Inference for Transparent Multi-Horizon Commodity Futures Prediction
Abhijit Gupta · 11 May 2025
From Pixels to Perception: Interpretable Predictions via Instance-wise Grouped Feature Selection
Moritz Vandenhirtz, Julia E. Vogt · 09 May 2025