ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1705.07874 · Cited By
A Unified Approach to Interpreting Model Predictions

22 May 2017 · Scott M. Lundberg, Su-In Lee · FAtt

Papers citing "A Unified Approach to Interpreting Model Predictions"

50 / 1,822 papers shown
  • Unifying Image Counterfactuals and Feature Attributions with Latent-Space Adversarial Attacks
    Jeremy Goldwasser, Giles Hooker · AAML · 21 Apr 2025
  • Surrogate Fitness Metrics for Interpretable Reinforcement Learning
    Philipp Altmann, Céline Davignon, Maximilian Zorn, Fabian Ritz, Claudia Linnhoff-Popien, Thomas Gabor · 20 Apr 2025
  • Mathematical Programming Models for Exact and Interpretable Formulation of Neural Networks
    Masoud Ataei, Edrin Hasaj, Jacob Gipp, Sepideh Forouzi · 19 Apr 2025
  • Leakage and Interpretability in Concept-Based Models
    Enrico Parisini, Tapabrata Chakraborti, Chris Harbron, Ben D. MacArthur, Christopher R. S. Banerji · 18 Apr 2025
  • Long-context Non-factoid Question Answering in Indic Languages
    Ritwik Mishra, R. Shah, Ponnurangam Kumaraguru · 18 Apr 2025
  • Decoding Vision Transformers: the Diffusion Steering Lens
    Ryota Takatsuki, Sonia Joseph, Ippei Fujisawa, Ryota Kanai · DiffM · 18 Apr 2025
  • Probabilistic Stability Guarantees for Feature Attributions
    Helen Jin, Anton Xue, Weiqiu You, Surbhi Goel, Eric Wong · 18 Apr 2025
  • Why and How LLMs Hallucinate: Connecting the Dots with Subsequence Associations
    Yiyou Sun, Y. Gai, Lijie Chen, Abhilasha Ravichander, Yejin Choi, D. Song · HILM · 17 Apr 2025
  • PCBEAR: Pose Concept Bottleneck for Explainable Action Recognition
    Jongseo Lee, Wooil Lee, Gyeong-Moon Park, Seong Tae Kim, Jinwoo Choi · 17 Apr 2025
  • Representation Learning for Tabular Data: A Comprehensive Survey
    Jun-Peng Jiang, Si-Yang Liu, Hao-Run Cai, Qile Zhou, Han-Jia Ye · LMTD · 17 Apr 2025
  • Can Moran Eigenvectors Improve Machine Learning of Spatial Data? Insights from Synthetic Data Validation
    Ziqi Li, Zhan Peng · 16 Apr 2025
  • Don't Just Translate, Agitate: Using Large Language Models as Devil's Advocates for AI Explanations
    Ashley Suh, Kenneth Alperin, Harry Li, Steven R. Gomez · 16 Apr 2025
  • Decision-based AI Visual Navigation for Cardiac Ultrasounds
    Andy Dimnaku, Dominic Yurk, Zhiyuan Gao, Arun Padmanabhan, Mandar Aras, Yaser Abu-Mostafa · 16 Apr 2025
  • Towards Explainable Fusion and Balanced Learning in Multimodal Sentiment Analysis
    Miaosen Luo, Yuncheng Jiang, Sijie Mai · 16 Apr 2025
  • C-SHAP for time series: An approach to high-level temporal explanations
    Annemarie Jutte, Faizan Ahmed, Jeroen Linssen, Maurice van Keulen · AI4TS · 15 Apr 2025
  • Towards Spatially-Aware and Optimally Faithful Concept-Based Explanations
    Shubham Kumar, Dwip Dalal, Narendra Ahuja · 15 Apr 2025
  • Large Language Model-Informed Feature Discovery Improves Prediction and Interpretation of Credibility Perceptions of Visual Content
    Yilang Peng, Sijia Qian, Yingdan Lu, Cuihua Shen · 15 Apr 2025
  • Challenges in interpretability of additive models
    Xinyu Zhang, Julien Martinelli, S. T. John · AAML, AI4CE · 14 Apr 2025
  • GlyTwin: Digital Twin for Glucose Control in Type 1 Diabetes Through Optimal Behavioral Modifications Using Patient-Centric Counterfactuals
    Asiful Arefeen, Saman Khamesian, Maria Adela Grando, Bithika Thompson, Hassan Ghasemzadeh · 14 Apr 2025
  • Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations
    Indu Panigrahi, Sunnie S. Y. Kim, Amna Liaqat, Rohan Jinturkar, Olga Russakovsky, Ruth C. Fong, Parastoo Abtahi · FAtt, HAI · 14 Apr 2025
  • Explainable Artificial Intelligence techniques for interpretation of food datasets: a review
    Leonardo Arrighi, Ingrid Alves de Moraes, Marco Zullich, Michele Simonato, Douglas Fernandes Barbin, Sylvio Barbon Junior · 12 Apr 2025
  • Are We Merely Justifying Results ex Post Facto? Quantifying Explanatory Inversion in Post-Hoc Model Explanations
    Zhen Tan, Song Wang, Yifan Li, Yu Kong, Jundong Li, Tianlong Chen, Huan Liu · FAtt · 11 Apr 2025
  • Hallucination, reliability, and the role of generative AI in science
    Charles Rathkopf · HILM · 11 Apr 2025
  • A constraints-based approach to fully interpretable neural networks for detecting learner behaviors
    Juan D. Pinto, Luc Paquette · 10 Apr 2025
  • Adaptive Shrinkage Estimation For Personalized Deep Kernel Regression In Modeling Brain Trajectories
    Vasiliki Tassopoulou, H. Shou, Christos Davatzikos · 10 Apr 2025
  • A Meaningful Perturbation Metric for Evaluating Explainability Methods
    Danielle Cohen, Hila Chefer, Lior Wolf · AAML · 09 Apr 2025
  • Hyperparameter Optimisation with Practical Interpretability and Explanation Methods in Probabilistic Curriculum Learning
    Llewyn Salt, Marcus Gallagher · 09 Apr 2025
  • PLM-eXplain: Divide and Conquer the Protein Embedding Space
    Jan van Eck, Dea Gogishvili, Wilson Silva, Sanne Abeln · 09 Apr 2025
  • Beware of "Explanations" of AI
    David Martens, Galit Shmueli, Theodoros Evgeniou, Kevin Bauer, Christian Janiesch, ..., Claudia Perlich, Wouter Verbeke, Alona Zharova, Patrick Zschech, F. Provost · 09 Apr 2025
  • Assessing how hyperparameters impact Large Language Models' sarcasm detection performance
    Montgomery Gole, Andriy Miranskyy · AI4MH · 08 Apr 2025
  • Explainable AI for building energy retrofitting under data scarcity
    Panagiota Rempi, Sotiris Pelekis, Alexandros-Menelaos Tzortzis, Evangelos Karakolis, Christos Ntanos, D. Askounis · 08 Apr 2025
  • GraphPINE: Graph Importance Propagation for Interpretable Drug Response Prediction
    Yoshitaka Inoue, Tianfan Fu, Augustin Luna · 07 Apr 2025
  • Concept Extraction for Time Series with ECLAD-ts
    Antonia Holzapfel, Andres Felipe Posada-Moreno, Sebastian Trimpe · AI4TS · 07 Apr 2025
  • Predicting Survivability of Cancer Patients with Metastatic Patterns Using Explainable AI
    Polycarp Nalela, Deepthi Rao, Praveen Rao · 07 Apr 2025
  • Hybrid machine learning data assimilation for marine biogeochemistry
    Ieuan Higgs, Ross Bannister, Jozef Skákala, Alberto Carrassi, Stefano Ciavatta · AI4Cl, AI4CE · 07 Apr 2025
  • Exploring Local Interpretable Model-Agnostic Explanations for Speech Emotion Recognition with Distribution-Shift
    Maja J. Hjuler, Line H. Clemmensen, Sneha Das · FAtt · 07 Apr 2025
  • Here Comes the Explanation: A Shapley Perspective on Multi-contrast Medical Image Segmentation
    Tianyi Ren, Juampablo Heras Rivera, Hitender Oswal, Yutong Pan, Agamdeep Chopra, Jacob Ruzevick, Mehmet Kurt · 06 Apr 2025
  • A Comparative Study of Explainable AI Methods: Model-Agnostic vs. Model-Specific Approaches
    Keerthi Devireddy · 05 Apr 2025
  • Unlocking Neural Transparency: Jacobian Maps for Explainable AI in Alzheimer's Detection
    Yasmine Mustafa, Mohamed Elmahallawy, Tie-Mei Luo · FAtt · 04 Apr 2025
  • Detecting Stereotypes and Anti-stereotypes the Correct Way Using Social Psychological Underpinnings
    Kaustubh Shivshankar Shejole, Pushpak Bhattacharyya · 04 Apr 2025
  • Interpretable Multimodal Learning for Tumor Protein-Metal Binding: Progress, Challenges, and Perspectives
    Xiaokun Liu, Sayedmohammadreza Rastegari, Yijun Huang, Sxe Chang Cheong, Weikang Liu, ..., Sina Tabakhi, Xianyuan Liu, Zheqing Zhu, Wei Sang, Haiping Lu · 04 Apr 2025
  • Structured Knowledge Accumulation: The Principle of Entropic Least Action in Forward-Only Neural Learning
    Bouarfa Mahi Quantiota · 04 Apr 2025
  • Am I Being Treated Fairly? A Conceptual Framework for Individuals to Ascertain Fairness
    Juliett Suárez Ferreira, Marija Slavkovik, Jorge Casillas · FaML · 03 Apr 2025
  • Geospatial Artificial Intelligence for Satellite-based Flood Extent Mapping: Concepts, Advances, and Future Perspectives
    Hyunho Lee, Wenwen Li · AI4CE · 03 Apr 2025
  • Engineering Artificial Intelligence: Framework, Challenges, and Future Direction
    Jay Lee, Hanqi Su, Dai-Yan Ji, Takanobu Minami · AI4CE · 03 Apr 2025
  • Integrating Identity-Based Identification against Adaptive Adversaries in Federated Learning
    Jakub Kacper Szelag, Ji-Jian Chin, Lauren Ansell, Sook-Chin Yip · 03 Apr 2025
  • Explainable and Interpretable Forecasts on Non-Smooth Multivariate Time Series for Responsible Gameplay
    Hussain Jagirdar, Rukma Talwadker, Aditya Pareek, Pulkit Agrawal, Tridib Mukherjee · AI4TS · 03 Apr 2025
  • Noiser: Bounded Input Perturbations for Attributing Large Language Models
    Mohammad Reza Ghasemi Madani, Aryo Pradipta Gema, Gabriele Sarti, Yu Zhao, Pasquale Minervini, Andrea Passerini · AAML · 03 Apr 2025
  • Antithetic Sampling for Top-k Shapley Identification
    Patrick Kolpaczki, Tim Nielen, Eyke Hüllermeier · TDI, FAtt · 02 Apr 2025
  • Multivariate Temporal Regression at Scale: A Three-Pillar Framework Combining ML, XAI, and NLP
    Jiztom Kavalakkatt Francis, Matthew J. Darr · 02 Apr 2025