IROF: a low resource evaluation metric for explanation methods (arXiv:2003.08747)

Laura Rieger, Lars Kai Hansen
9 March 2020
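For context, IROF (Iterative Removal Of Features) evaluates a saliency explanation by segmenting the input into superpixels, deleting them in order of attributed relevance, and measuring the area over the curve of the model's declining class score; a faithful explanation makes the score drop quickly. The sketch below illustrates that idea for a single image. It is a minimal illustration under assumptions, not the authors' reference implementation: `model` is a hypothetical callable returning class probabilities for a batch, `attr` is a saliency map aligned with the image, and SLIC from scikit-image stands in for the segmentation step.

```python
# Minimal sketch of the IROF idea: rank superpixels by attributed
# relevance, remove them most-relevant-first, and measure how fast the
# target class score degrades. `model` and `attr` are assumptions made
# for illustration; this is not the reference implementation.
import numpy as np
from skimage.segmentation import slic

def irof_score(model, x, attr, target, n_segments=100):
    """Area over the degradation curve (higher = more faithful) for one
    image `x` of shape (H, W, C) and saliency map `attr` of shape (H, W)."""
    segments = slic(x, n_segments=n_segments, start_label=0)
    seg_ids = np.unique(segments)
    # Mean attribution per superpixel; sort most relevant first.
    relevance = np.array([attr[segments == s].mean() for s in seg_ids])
    order = seg_ids[np.argsort(relevance)[::-1]]

    baseline = x.mean(axis=(0, 1))           # uninformative per-channel fill
    p0 = model(x[None])[0, target]           # unperturbed class score
    degraded = x.copy()
    scores = [1.0]
    for s in order:
        degraded[segments == s] = baseline   # remove next superpixel
        scores.append(model(degraded[None])[0, target] / p0)
    # Rectangle-rule area under the normalized curve; AOC = 1 - AUC.
    return 1.0 - float(np.mean(scores))
```

Averaging this score over a test set gives one number per attribution method, which is roughly how the papers listed below use the metric for comparison.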

Papers citing "IROF: a low resource evaluation metric for explanation methods"

40 papers
Value bounds and Convergence Analysis for Averages of LRP attributions
Alexander Binder, Nastaran Takmil-Homayouni, Ürün Dogan
FAtt · 10 Sep 2025

DeepFaith: A Domain-Free and Model-Agnostic Unified Framework for Highly Faithful Explanations
Yuhan Guo, Lizhong Ding, Shihan Jia, Yanyu Ren, P. Li, Jiarun Fu, Changsheng Li, Ye Yuan, Guoren Wang
05 Aug 2025

On the Effectiveness of Methods and Metrics for Explainable AI in Remote Sensing Image Scene Classification
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (IEEE J-STARS), 2025
Jonas Klotz, Tom Burgert, Tim Siebert
08 Jul 2025

Metric-Guided Synthesis of Class Activation Mapping
Alejandro Luque-Cerpa, Elizabeth Polgreen, Ajitha Rajan, Hazem Torfah
14 Apr 2025

Towards an Evaluation Framework for Explainable Artificial Intelligence Systems for Health and Well-being
International Conference on Evaluation of Novel Approaches to Software Engineering (ENASE), 2025
Esperança Amengual-Alcover, Antoni Jaume-i-Capó, Miquel Miró-Nicolau, Gabriel Moyà Alcover, Antonia Paniza-Fullana
11 Apr 2025

Fast Fourier Correlation is a Highly Efficient and Accurate Feature Attribution Algorithm from the Perspective of Control Theory and Game Theory
Zechen Liu, Feiyang Zhang, Wei Song, Xuelong Li, Wei Wei
FAtt · 02 Apr 2025

Evaluate with the Inverse: Efficient Approximation of Latent Explanation Quality Distribution
AAAI Conference on Artificial Intelligence (AAAI), 2025
Carlos Eiras-Franco, Anna Hedström, Marina M.-C. Höhne
XAI · 24 Feb 2025

Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Neural Information Processing Systems (NeurIPS), 2024
Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger
XAI, ELM · 03 Jan 2025

Advancing Attribution-Based Neural Network Explainability through Relative Absolute Magnitude Layer-Wise Relevance Propagation and Multi-Component Evaluation
ACM Transactions on Intelligent Systems and Technology (ACM TIST), 2024
Davor Vukadin, Petar Afrić, Marin Šilić, Goran Delač
FAtt · 12 Dec 2024

From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation
Kristoffer Wickstrøm, Marina M.-C. Höhne, Anna Hedström
AAML · 07 Dec 2024

Benchmarking XAI Explanations with Human-Aligned Evaluations
Rémi Kazmierczak, Steve Azzolin, Eloise Berthier, Anna Hedström, Patricia Delhomme, ..., Goran Frehse, Baptiste Caramiaux, Andrea Passerini, Gianni Franchi
04 Nov 2024

Benchmarking the Attribution Quality of Vision Models
Robin Hesse, Simone Schaub-Meyer, Stefan Roth
FAtt · 16 Jul 2024

Challenges in explaining deep learning models for data with biological variation
PLoS ONE, 2024
Lenka Tětková, E. Dreier, Robin Malm, Lars Kai Hansen
AAML · 14 Jun 2024

Expected Grad-CAM: Towards gradient faithfulness
Vincenzo Buono, Peyman Sheikholharam Mashhadi, M. Rahat, Prayag Tiwari, Stefan Byttner
FAtt · 03 Jun 2024

A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
FAtt, LRM · 03 May 2024

Feature Attribution with Necessity and Sufficiency via Dual-stage Perturbation Test for Causal Explanation
Xuexin Chen, Ruichu Cai, Zhengting Huang, Yuxuan Zhu, Julien Horwood, Zhifeng Hao, Zijian Li, Jose Miguel Hernandez-Lobato
AAML · 13 Feb 2024

A comprehensive study on fidelity metrics for XAI
Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover
19 Jan 2024

Decoupling Pixel Flipping and Occlusion Strategy for Consistent XAI Benchmarks
Stefan Blücher, Johanna Vielhaben, Nils Strodthoff
AAML · 12 Jan 2024

Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
LRM · 12 Jan 2024

An adversarial attack approach for eXplainable AI evaluation on deepfake detection models
Computers & Security (CS), 2023
Balachandar Gowrisankar, V. Thing
AAML · 08 Dec 2023

Assessing Fidelity in XAI post-hoc techniques: A Comparative Study with Ground Truth Explanations Datasets
Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover
XAI · 03 Nov 2023

Explanation-based Training with Differentiable Insertion/Deletion Metric-aware Regularizers
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023
Yuya Yoshikawa, Tomoharu Iwata
19 Oct 2023

Harmonizing Feature Attributions Across Deep Learning Architectures: Enhancing Interpretability and Consistency
Deutsche Jahrestagung für Künstliche Intelligenz (KI), 2023
Md Abdul Kadir, G. Addluri, Daniel Sonntag
FAtt · 05 Jul 2023

Causal Analysis for Robust Interpretability of Neural Networks
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2023
Ola Ahmad, Nicolas Béreux, Loïc Baret, V. Hashemi, Freddy Lecue
CML · 15 May 2023

Robustness of Visual Explanations to Common Data Augmentation
Lenka Tětková, Lars Kai Hansen
AAML · 18 Apr 2023

Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators in Neural Networks
Pattern Recognition Letters (PR), 2023
L. Brocki, N. C. Chung
FAtt, AAML · 02 Mar 2023

Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
Artificial Intelligence for the Earth Systems (AI4ES), 2023
P. Bommer, M. Kretschmer, Anna Hedström, Dilyara Bareeva, Marina M.-C. Höhne
01 Mar 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
14 Feb 2023

Relational Local Explanations
V. Borisov, Gjergji Kasneci
FAtt · 23 Dec 2022

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
XAI, FAtt, AAML · 09 Nov 2022

Fidelity of Ensemble Aggregation for Saliency Map Explanations using Bayesian Optimization Techniques
Yannik Mahlau, Christian Nolde
FAtt · 04 Jul 2022

Xplique: A Deep Learning Explainability Toolbox
Thomas Fel, Lucas Hervier, David Vigouroux, Antonin Poché, Justin Plakoo, ..., Agustin Picard, C. Nicodeme, Laurent Gardes, G. Flandin, Thomas Serre
09 Jun 2022

Evaluating Feature Attribution Methods in the Image Domain
Machine Learning (ML), 2022
Arne Gevaert, Axel-Jan Rousseau, Thijs Becker, D. Valkenborg, T. D. Bie, Yvan Saeys
FAtt · 22 Feb 2022

Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
Journal of Machine Learning Research (JMLR), 2022
Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
XAI, ELM · 14 Feb 2022

RELAX: Representation Learning Explainability
International Journal of Computer Vision (IJCV), 2021
Kristoffer Wickstrøm, Daniel J. Trosten, Sigurd Løkse, Ahcène Boubekki, Karl Øyvind Mikalsen, Michael C. Kampffmeyer, Robert Jenssen
FAtt · 19 Dec 2021

A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
V. Borisov, Johannes Meier, J. V. D. Heuvel, Hamed Jalali, Gjergji Kasneci
FAtt · 14 Nov 2021

Spatio-Temporal Perturbations for Video Attribution
Zhenqiang Li, Weimin Wang, Zuoyue Li, Yifei Huang, Yoichi Sato
01 Sep 2021

How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2020
Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre
XAI, FAtt · 07 Sep 2020

Evaluating and Aggregating Feature-based Model Explanations
International Joint Conference on Artificial Intelligence (IJCAI), 2020
Umang Bhatt, Adrian Weller, J. M. F. Moura
XAI · 01 May 2020

Aggregating explanation methods for stable and robust explainability
Laura Rieger, Lars Kai Hansen
AAML, FAtt · 01 Mar 2019