ResearchTrend.AI

arXiv:1806.07421 · Cited By
RISE: Randomized Input Sampling for Explanation of Black-box Models
19 June 2018
Vitali Petsiuk, Abir Das, Kate Saenko
FAtt

Papers citing "RISE: Randomized Input Sampling for Explanation of Black-box Models"

50 / 651 papers shown
Efficient and Accurate Explanation Estimation with Distribution Compression
Hubert Baniecki, Giuseppe Casalicchio, Bernd Bischl, Przemyslaw Biecek
FAtt
26 Jun 2024

Large Language Models are Interpretable Learners
Ruochen Wang, Si Si, Felix X. Yu, Dorothea Wiesmann, Cho-Jui Hsieh, Inderjit Dhillon
25 Jun 2024

DiffExplainer: Unveiling Black Box Models Via Counterfactual Generation
Yingying Fang, Shuang Wu, Zihao Jin, Caiwen Xu, Shiyi Wang, Simon Walsh, Guang Yang
MedIm
21 Jun 2024

MiSuRe is all you need to explain your image segmentation
Syed Nouman Hasany, Fabrice Mériaudeau, Caroline Petitjean
18 Jun 2024

Inpainting the Gaps: A Novel Framework for Evaluating Explanation Methods in Vision Transformers
Lokesh Badisa, Sumohana S. Channappayya
17 Jun 2024

Are Objective Explanatory Evaluation metrics Trustworthy? An Adversarial Analysis
Prithwijit Chowdhury, M. Prabhushankar, Ghassan AlRegib, Mohamed Deriche
12 Jun 2024

Graphical Perception of Saliency-based Model Explanations
Yayan Zhao, Mingwei Li, Matthew Berger
XAI, FAtt
11 Jun 2024

Explaining Representation Learning with Perceptual Components
Yavuz Yarici, Kiran Kokilepersaud, M. Prabhushankar, Ghassan AlRegib
SSL, FAtt
11 Jun 2024

Understanding Inhibition Through Maximally Tense Images
Chris Hamblin, Srijani Saha, Talia Konkle, George Alvarez
FAtt
08 Jun 2024

Leveraging Activations for Superpixel Explanations
Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair
AAML, FAtt, XAI
07 Jun 2024
Tensor Polynomial Additive Model
Yang Chen, Ce Zhu, Jiani Liu, Yipeng Liu
TPM
05 Jun 2024

Expected Grad-CAM: Towards gradient faithfulness
Vincenzo Buono, Peyman Sheikholharam Mashhadi, M. Rahat, Prayag Tiwari, Stefan Byttner
FAtt
03 Jun 2024

How Video Meetings Change Your Expression
Sumit Sarin, Utkarsh Mall, Purva Tendulkar, Carl Vondrick
CVBM
03 Jun 2024

VOICE: Variance of Induced Contrastive Explanations to quantify Uncertainty in Neural Network Interpretability
M. Prabhushankar, Ghassan AlRegib
FAtt, UQCV
01 Jun 2024

Listenable Maps for Zero-Shot Audio Classifiers
Francesco Paissan, Luca Della Libera, Mirco Ravanelli, Cem Subakan
27 May 2024

SE3D: A Framework For Saliency Method Evaluation In 3D Imaging
Mariusz Wiśniewski, Loris Giulivi, Giacomo Boracchi
23 May 2024

Concept Visualization: Explaining the CLIP Multi-modal Embedding Using WordNet
Loris Giulivi, Giacomo Boracchi
23 May 2024

Explaining Black-box Model Predictions via Two-level Nested Feature Attributions with Consistency Property
Yuya Yoshikawa, Masanari Kimura, Ryotaro Shimizu, Yuki Saito
FAtt
23 May 2024

Hierarchical Salient Patch Identification for Interpretable Fundus Disease Localization
Yitao Peng, Lianghua He, D. Hu
FAtt
23 May 2024

Part-based Quantitative Analysis for Heatmaps
Osman Tursun, Sinan Kalkan, Simon Denman, S. Sridharan, Clinton Fookes
22 May 2024
FFAM: Feature Factorization Activation Map for Explanation of 3D Detectors
Shuai Liu, Boyang Li, Zhiyu Fang, Mingyue Cui, Kai Huang
21 May 2024

Improving the Explain-Any-Concept by Introducing Nonlinearity to the Trainable Surrogate Model
Mounes Zaval, Sedat Ozer
LRM
20 May 2024

Explainable Facial Expression Recognition for People with Intellectual Disabilities
Silvia Ramis Guarinos, Cristina Manresa Yee, Jose Maria Buades Rubio, F. X. Gaya-Morey
CVBM
19 May 2024

Faithful Attention Explainer: Verbalizing Decisions Based on Discriminative Features
Yao Rong, David Scheerer, Enkelejda Kasneci
16 May 2024

Parallel Backpropagation for Shared-Feature Visualization
Alexander Lappe, Anna Bognár, Ghazaleh Ghamkhari Nejad, A. Mukovskiy, Lucas M. Martini, Martin A. Giese, Rufin Vogels
FAtt
16 May 2024

To Trust or Not to Trust: Towards a novel approach to measure trust for XAI systems
Miquel Miró-Nicolau, Gabriel Moyà Alcover, Antoni Jaume-i-Capó, Manuel González Hidalgo, Maria Gemma Sempere Campello, Juan Antonio Palmer Sancho
09 May 2024

Explainable Interface for Human-Autonomy Teaming: A Survey
Xiangqi Kong, Yang Xing, Antonios Tsourdos, Ziyue Wang, Weisi Guo, Adolfo Perrusquía, Andreas Wikander
04 May 2024

Explaining models relating objects and privacy
Alessio Xompero, Myriam Bontonou, J. Arbona, Emmanouil Benetos, Andrea Cavallaro
02 May 2024

Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova
02 May 2024
Backdoor-based Explainable AI Benchmark for High Fidelity Evaluation of Attribution Methods
Peiyu Yang, Naveed Akhtar, Jiantong Jiang, Ajmal Saeed Mian
XAI
02 May 2024

Reliable or Deceptive? Investigating Gated Features for Smooth Visual Explanations in CNNs
Soham Mitra, Atri Sukul, S. K. Roy, Pravendra Singh, Vinay K. Verma
AAML, FAtt
30 Apr 2024

Statistics and explainability: a fruitful alliance
Valentina Ghidini
30 Apr 2024

Flow AM: Generating Point Cloud Global Explanations by Latent Alignment
Hanxiao Tan
29 Apr 2024

Towards Quantitative Evaluation of Explainable AI Methods for Deepfake Detection
K. Tsigos, Evlampios Apostolidis, Spyridon Baxevanakis, Symeon Papadopoulos, Vasileios Mezaris
AAML
29 Apr 2024

T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients
Evandro S. Ortigossa, Fábio F. Dias, Brian Barr, Claudio T. Silva, L. G. Nonato
FAtt
25 Apr 2024

Guided AbsoluteGrad: Magnitude of Gradients Matters to Explanation's Localization and Saliency
Jun Huang, Yan Liu
FAtt
23 Apr 2024

A Learning Paradigm for Interpretable Gradients
Felipe Figueroa, Hanwei Zhang, R. Sicre, Yannis Avrithis, Stéphane Ayache
FAtt
23 Apr 2024

CA-Stream: Attention-based pooling for interpretable image recognition
Felipe Torres, Hanwei Zhang, R. Sicre, Stéphane Ayache, Yannis Avrithis
23 Apr 2024

Efficient and Concise Explanations for Object Detection with Gaussian-Class Activation Mapping Explainer
Quoc Khanh Nguyen, Truong Thanh Hung Nguyen, V. Nguyen, Van Binh Truong, Tuong Phan, Hung Cao
20 Apr 2024
COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images
Dmytro Shvetsov, Joonas Ariva, M. Domnich, Raul Vicente, Dmytro Fishman
MedIm
19 Apr 2024

Toward Understanding the Disagreement Problem in Neural Network Feature Attribution
Niklas Koenen, Marvin N. Wright
FAtt
17 Apr 2024

Evolving Interpretable Visual Classifiers with Large Language Models
Mia Chiquier, Utkarsh Mall, Carl Vondrick
VLM
15 Apr 2024

Observation-specific explanations through scattered data approximation
Valentina Ghidini, Michael Multerer, Jacopo Quizi, Rohan Sen
FAtt
12 Apr 2024

LeGrad: An Explainability Method for Vision Transformers via Feature Formation Sensitivity
Walid Bousselham, Angie Boggust, Sofian Chaybouti, Hendrik Strobelt, Hilde Kuehne
04 Apr 2024

Accurate estimation of feature importance faithfulness for tree models
Mateusz Gajewski, Adam Karczmarz, Mateusz Rapicki, Piotr Sankowski
04 Apr 2024

Smooth Deep Saliency
Rudolf Herdt, Maximilian Schmidt, Daniel Otero Baguer, Peter Maass
MedIm, FAtt
02 Apr 2024

Uncertainty Quantification for Gradient-based Explanations in Neural Networks
Mihir Mulye, Matias Valdenegro-Toro
UQCV, FAtt
25 Mar 2024

Forward Learning for Gradient-based Black-box Saliency Map Generation
Zeliang Zhang, Mingqian Feng, Jinyang Jiang, Rongyi Zhu, Yijie Peng, Chenliang Xu
FAtt
22 Mar 2024

Listenable Maps for Audio Classifiers
Francesco Paissan, Mirco Ravanelli, Cem Subakan
19 Mar 2024

XPose: eXplainable Human Pose Estimation
Luyu Qiu, Jianing Li, Lei Wen, Chi Su, Fei Hao, C. Zhang, Lei Chen
19 Mar 2024