IROF: a low resource evaluation metric for explanation methods

9 March 2020
Laura Rieger, Lars Kai Hansen
ArXiv · PDF · HTML
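
For orientation, the metric proposed in this paper works roughly as follows: the input image is partitioned into superpixels, the superpixels are ablated one by one in order of the relevance an explanation method assigns to them, and the resulting degradation of the model's class score is summarized as an area-over-the-curve value. The snippet below is a minimal sketch of that idea only, not the authors' reference implementation; it assumes scikit-image's SLIC segmenter, a model_fn that maps a batch of images to class probabilities, mean-value ablation, and a simple mean-based area estimate. The name irof_score and all parameters are illustrative.

```python
# Illustrative sketch of the IROF idea (iterative removal of features).
# Assumptions, not taken from the paper's code: SLIC superpixels,
# per-channel mean-value ablation, mean-based area-over-the-curve.
import numpy as np
from skimage.segmentation import slic


def irof_score(image, attribution, model_fn, target_class, n_segments=64):
    """image: (H, W, C) float array; attribution: (H, W) relevance map;
    model_fn: batch of images -> (batch, n_classes) probabilities."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    seg_ids = np.unique(segments)

    # Rank superpixels by mean attributed relevance, most relevant first.
    relevance = np.array([attribution[segments == s].mean() for s in seg_ids])
    order = seg_ids[np.argsort(relevance)[::-1]]

    ablated = image.copy()
    fill = image.mean(axis=(0, 1))                 # per-channel mean value
    baseline = model_fn(image[None])[0, target_class]  # assumed > 0 (softmax)

    scores = [1.0]                                 # normalized score curve
    for s in order:
        ablated[segments == s] = fill              # remove one superpixel
        p = model_fn(ablated[None])[0, target_class]
        scores.append(p / baseline)

    # Fast degradation => small area under the curve => high IROF score.
    return 1.0 - float(np.mean(scores))
```

In the paper the degradation curve is computed per image and aggregated over a test set; the normalization and fill strategy above are simplifications chosen to keep the sketch short.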

Papers citing "IROF: a low resource evaluation metric for explanation methods"

10 / 10 papers shown

 1. Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
    Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger
    Topics: XAI, ELM · 03 Jan 2025

 2. A Tale of Two Imperatives: Privacy and Explainability
    Supriya Manna, Niladri Sett
    30 Dec 2024

 3. Benchmarking the Attribution Quality of Vision Models
    Robin Hesse, Simone Schaub-Meyer, Stefan Roth
    Topics: FAtt · 16 Jul 2024

 4. A Fresh Look at Sanity Checks for Saliency Maps
    Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
    Topics: FAtt, LRM · 03 May 2024

 5. Feature Attribution with Necessity and Sufficiency via Dual-stage Perturbation Test for Causal Explanation
    Xuexin Chen, Ruichu Cai, Zhengting Huang, Yuxuan Zhu, Julien Horwood, Zhifeng Hao, Zijian Li, Jose Miguel Hernandez-Lobato
    Topics: AAML · 13 Feb 2024

 6. A comprehensive study on fidelity metrics for XAI
    Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà Alcover
    19 Jan 2024

 7. An adversarial attack approach for eXplainable AI evaluation on deepfake detection models
    Balachandar Gowrisankar, V. Thing
    Topics: AAML · 08 Dec 2023

 8. On the Robustness of Explanations of Deep Neural Network Models: A Survey
    Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
    Topics: XAI, FAtt, AAML · 09 Nov 2022

 9. Evaluating Feature Attribution Methods in the Image Domain
    Arne Gevaert, Axel-Jan Rousseau, Thijs Becker, D. Valkenborg, T. D. Bie, Yvan Saeys
    Topics: FAtt · 22 Feb 2022

10. Evaluating and Aggregating Feature-based Model Explanations
    Umang Bhatt, Adrian Weller, J. M. F. Moura
    Topics: XAI · 01 May 2020