The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Neural Information Processing Systems (NeurIPS), 2021
Giang Nguyen, Daeyoung Kim, Anh Totti Nguyen
31 May 2021 · FAtt · arXiv:2105.14944

Papers citing "The effectiveness of feature attribution methods and its correlation with automatic evaluation scores"

50 / 64 papers shown

FACE: Faithful Automatic Concept Extraction
Dipkamal Bhusal, Michael Clifford, Sara Rampazzi, Nidhi Rastogi
13 Oct 2025 · CVBM

Learning Causal Structure Distributions for Robust Planning
IEEE Robotics and Automation Letters (IEEE RA-L), 2025
Alejandro Murillo-Gonzalez, Junhong Xu, Lantao Liu
08 Aug 2025 · CML

Comprehensive Evaluation of Prototype Neural Networks
Philipp Schlinge, Steffen Meinert, Martin Atzmueller
09 Jul 2025

Navigating the Rashomon Effect: How Personalization Can Help Adjust Interpretable Machine Learning Models to Individual Users
European Conference on Information Systems (ECIS), 2025
Julian Rosenberger, Philipp Schröppel, Sven Kruschel, Mathias Kraus, Patrick Zschech, Maximilian Förster
11 May 2025 · FAtt

What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI)
Felix Kares, Timo Speith, Hanwei Zhang, Markus Langer
23 Apr 2025 · FAtt, XAI

Measuring the (Un)Faithfulness of Concept-Based Explanations
Shubham Kumar, Dwip Dalal
15 Apr 2025

Interactive Medical Image Analysis with Concept-based Similarity Reasoning
Computer Vision and Pattern Recognition (CVPR), 2025
Ta Duc Huy, Sen Kim Tran, Phan Nguyen, Nguyen Hoang Tran, Tran Bao Sam, Anton Van Den Hengel, Zhibin Liao, Johan Verjans, Minh-Son To, Vu Minh Hieu Phan
10 Mar 2025

Archetypal SAE: Adaptive and Stable Dictionary Learning for Concept Extraction in Large Vision Models
Thomas Fel, Ekdeep Singh Lubana, Jacob S. Prince, M. Kowal, Victor Boutin, Isabel Papadimitriou, Binxu Wang, Martin Wattenberg, Demba Ba, Talia Konkle
18 Feb 2025

Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis
06 Feb 2025

Regulation of Language Models With Interpretability Will Likely Result In A Performance Trade-Off
Eoin M. Kenny, Julie A. Shah
12 Dec 2024

On the Evaluation Consistency of Attribution-based Explanations
European Conference on Computer Vision (ECCV), 2024
Jiarui Duan, Haoling Li, Haofei Zhang, Hao Jiang, Mengqi Xue, Li Sun, Weilong Dai, Mingli Song
28 Jul 2024 · XAI

They Look Like Each Other: Case-based Reasoning for Explainable Depression Detection on Twitter using Large Language Models
Mohammad Saeid Mahdavinejad, Peyman Adibi, A. Monadjemi, Pascal Hitzler
21 Jul 2024

Understanding Visual Feature Reliance through the Lens of Complexity
Thomas Fel, Louis Bethune, Andrew Kyle Lampinen, Thomas Serre, Katherine Hermann
08 Jul 2024 · FAtt, CoGe

SLIM: Spuriousness Mitigation with Minimal Human Annotations
Xiwei Xuan, Ziquan Deng, Hsuan-Tien Lin, Kwan-Liu Ma
08 Jul 2024

Selecting Interpretability Techniques for Healthcare Machine Learning models
Daniel Sierra-Botero, Ana Molina-Taborda, Mario S. Valdés-Tresanco, Alejandro Hernández-Arango, Leonardo Espinosa-Leal, Alexander Karpenko, O. Lopez-Acevedo
14 Jun 2024

Graphical Perception of Saliency-based Model Explanations
Yayan Zhao, Mingwei Li, Matthew Berger
11 Jun 2024 · XAI, FAtt

Part-based Quantitative Analysis for Heatmaps
Osman Tursun, Sinan Kalkan, Akila Pemasiri, Sridha Sridharan, Clinton Fookes
22 May 2024

Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
ICT Express (IE), 2024
Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova
02 May 2024

Allowing humans to interactively guide machines where to look does not always improve human-AI team's classification accuracy
Giang Nguyen, Mohammad Reza Taesiri, Sunnie S. Y. Kim, Anh Totti Nguyen
08 Apr 2024 · HAI, AAML, FAtt

How explainable AI affects human performance: A systematic review of the behavioural consequences of saliency maps
International Journal of Human-Computer Interaction (IJHCI), 2024
Romy Müller
03 Apr 2024 · HAI

Feature Accentuation: Revealing 'What' Features Respond to in Natural Images
Christopher Hamblin, Thomas Fel, Srijani Saha, Talia Konkle, George A. Alvarez
15 Feb 2024 · FAtt

Keep the Faith: Faithful Explanations in Convolutional Neural Networks for Case-Based Reasoning
AAAI Conference on Artificial Intelligence (AAAI), 2023
Tom Nuno Wolf, Fabian Bongratz, Anne-Marie Rickmann, Sebastian Polsterl, Christian Wachinger
15 Dec 2023 · AAML, FAtt

Interpretability is in the eye of the beholder: Human versus artificial classification of image segments generated by humans versus XAI
International Journal of Human-Computer Interaction (IJHCI), 2023
Romy Müller, Marius Thoss, Julian Ullrich, Steffen Seitz, Carsten Knoll
21 Nov 2023

Instance Segmentation under Occlusions via Location-aware Copy-Paste Data Augmentation
Son Nguyen, Mikel Lainsa, Hung Dao, Daeyoung Kim, Giang Nguyen
27 Oct 2023

May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability
International Journal of Human-Computer Interaction (IJHCI), 2023
Tong Zhang, Xiaoyu Yang, Boyang Albert Li
25 Sep 2023

Interpretability-Aware Vision Transformer
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu
14 Sep 2023 · ViT

PCNN: Probable-Class Nearest-Neighbor Explanations Improve Fine-Grained Image Classification Accuracy for AIs and Humans
Giang Nguyen, Valerie Chen, Mohammad Reza Taesiri, Anh Totti Nguyen
25 Aug 2023

A Dual-Perspective Approach to Evaluating Feature Attribution Methods
Yawei Li, Yanglin Zhang, Kenji Kawaguchi, Ashkan Khakzar, B. Bischl, Mina Rezaei
17 Aug 2023 · FAtt, XAI

FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
IEEE International Conference on Computer Vision (ICCV), 2023
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
11 Aug 2023 · AAML

Precise Benchmarking of Explainable AI Attribution Methods
Rafael Brandt, Daan Raatjens, G. Gaydadjiev
06 Aug 2023 · XAI

The Co-12 Recipe for Evaluating Interpretable Part-Prototype Image Classifiers
Meike Nauta, Christin Seifert
26 Jul 2023

The Impact of Imperfect XAI on Human-AI Decision-Making
Katelyn Morrison, Philipp Spitzer, Violet Turri, Michelle C. Feng, Niklas Kühl, Adam Perer
25 Jul 2023

Saliency strikes back: How filtering out high frequencies improves white-box explanations
International Conference on Machine Learning (ICML), 2023
Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, R. V. Rullen, Thomas Serre
18 Jul 2023 · FAtt

Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization
Neural Information Processing Systems (NeurIPS), 2023
Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, ..., Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre
11 Jun 2023 · FAtt

A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
Neural Information Processing Systems (NeurIPS), 2023
Thomas Fel, Victor Boutin, Mazda Moayeri, Rémi Cadène, Louis Bethune, Léo Andéol, Mathieu Chalvidal, Thomas Serre
11 Jun 2023 · FAtt

Humans, AI, and Context: Understanding End-Users' Trust in a Real-World Computer Vision Application
Conference on Fairness, Accountability and Transparency (FAccT), 2023
Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, Andrés Monroy-Hernández
15 May 2023

In Search of Verifiability: Explanations Rarely Enable Complementary Performance in AI-Advised Decision Making
The AI Magazine (AI Mag.), 2023
Raymond Fok, Daniel S. Weld
12 May 2023

Explaining RL Decisions with Trajectories
International Conference on Learning Representations (ICLR), 2023
Shripad Deshmukh, Arpan Dasgupta, Balaji Krishnamurthy, Nan Jiang, Chirag Agarwal, Georgios Theocharous, J. Subramanian
06 May 2023 · OffRL

Neighboring Words Affect Human Interpretation of Saliency Explanations
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Tim Dockhorn, Yaoliang Yu, Heike Adel, Mahdi Zolnouri, V. Nia
04 May 2023 · FAtt, MILM

Mitigating Spurious Correlations in Multi-modal Models during Fine-tuning
International Conference on Machine Learning (ICML), 2023
Yu Yang, Besmira Nushi, Hamid Palangi, Baharan Mirzasoleiman
08 Apr 2023

Why is plausibility surprisingly problematic as an XAI criterion?
Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
30 Mar 2023

Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability
Neural Networks (Neural Netw.), 2023
Soyoun Won, Sung-Ho Bae, Seong Tae Kim
26 Mar 2023

Human-AI Collaboration: The Effect of AI Delegation on Human Task Performance and Task Satisfaction
International Conference on Intelligent User Interfaces (IUI), 2023
Patrick Hemmer, Monika Westphal, Max Schemmer, S. Vetter, Michael Vossing, G. Satzger
16 Mar 2023

Learning Human-Compatible Representations for Case-Based Decision Support
International Conference on Learning Representations (ICLR), 2023
Han Liu, Yizhou Tian, Chacha Chen, Shi Feng, Yuxin Chen, Chenhao Tan
06 Mar 2023

Invisible Users: Uncovering End-Users' Requirements for Explainable AI via Explanation Forms and Goals
Weina Jin, Jianyu Fan, D. Gromala, Philippe Pasquier, Ghassan Hamarneh
10 Feb 2023

Red Teaming Deep Neural Networks with Feature Synthesis Tools
Neural Information Processing Systems (NeurIPS), 2023
Stephen Casper, Yuxiao Li, Jiawei Li, Tong Bu, Ke Zhang, K. Hariharan, Dylan Hadfield-Menell
08 Feb 2023 · AAML

Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation
International Conference on Learning Representations (ICLR), 2022
Julius Adebayo, M. Muelly, H. Abelson, Been Kim
09 Dec 2022

Overcoming Catastrophic Forgetting by XAI
Giang Nguyen
25 Nov 2022

OCTET: Object-aware Counterfactual Explanations
Computer Vision and Pattern Recognition (CVPR), 2022
Mehdi Zemni, Mickaël Chen, Éloi Zablocki, H. Ben-younes, Patrick Pérez, Matthieu Cord
22 Nov 2022 · AAML

CRAFT: Concept Recursive Activation FacTorization for Explainability
Computer Vision and Pattern Recognition (CVPR), 2022
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022