RISE: Randomized Input Sampling for Explanation of Black-box Models

19 June 2018
Vitali Petsiuk, Abir Das, Kate Saenko
FAtt

Papers citing "RISE: Randomized Input Sampling for Explanation of Black-box Models"

50 / 651 papers shown
1. Generative Perturbation Analysis for Probabilistic Black-Box Anomaly Attribution
   T. Idé, Naoki Abe · 09 Aug 2023
2. From Fake to Real: Pretraining on Balanced Synthetic Images to Prevent Spurious Correlations in Image Recognition
   Maan Qraitem, Kate Saenko, Bryan A. Plummer · 08 Aug 2023
3. Precise Benchmarking of Explainable AI Attribution Methods
   Rafael Brandt, Daan Raatjens, G. Gaydadjiev · XAI · 06 Aug 2023
4. Evaluating Link Prediction Explanations for Graph Neural Networks
   Claudio Borile, Alan Perotti, Andre' Panisson · FAtt · 03 Aug 2023
5. Beyond One-Hot-Encoding: Injecting Semantics to Drive Image Classifiers
   Alan Perotti, Simone Bertolotto, Eliana Pastor, Andre' Panisson · 01 Aug 2023
6. Saliency strikes back: How filtering out high frequencies improves white-box explanations
   Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, R. V. Rullen, Thomas Serre · FAtt · 18 Jul 2023
7. On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations
   Emanuele Albini, Shubham Sharma, Saumitra Mishra, Danial Dervovic, Daniele Magazzeni · FAtt · 13 Jul 2023
8. A large calcium-imaging dataset reveals a systematic V4 organization for natural scenes
   Tian-Yi Wang, Haoxuan Yao, T. Lee, Jiayi Hong, Yang Li, Hongfei Jiang, I. Andolina, Shiming Tang · 03 Jul 2023
9. Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing
   Ariel N. Lee, Sarah Adel Bargal, Janavi Kasera, Stan Sclaroff, Kate Saenko, Nataniel Ruiz · 30 Jun 2023
10. Evaluating the overall sensitivity of saliency-based explanation methods
    Harshinee Sriram, Cristina Conati · AAML, XAI, FAtt · 21 Jun 2023
11. B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers
    Moritz D Boehle, Navdeeppal Singh, Mario Fritz, Bernt Schiele · 19 Jun 2023
12. Rosetta Neurons: Mining the Common Units in a Model Zoo
    Amil Dravid, Yossi Gandelsman, Alexei A. Efros, Assaf Shocher · 15 Jun 2023
13. On the Robustness of Removal-Based Feature Attributions
    Christy Lin, Ian Covert, Su-In Lee · 12 Jun 2023
14. Unlocking Feature Visualization for Deeper Networks with MAgnitude Constrained Optimization
    Thomas Fel, Thibaut Boissin, Victor Boutin, Agustin Picard, Paul Novello, ..., Drew Linsley, Tom Rousseau, Rémi Cadène, Laurent Gardes, Thomas Serre · FAtt · 11 Jun 2023
15. A Holistic Approach to Unifying Automatic Concept Extraction and Concept Importance Estimation
    Thomas Fel, Victor Boutin, Mazda Moayeri, Rémi Cadène, Louis Bethune, Léo Andéol, Mathieu Chalvidal, Thomas Serre · FAtt · 11 Jun 2023
16. Two-Stage Holistic and Contrastive Explanation of Image Classification
    Weiyan Xie, Xiao-hui Li, Zhi Lin, Leonard K. M. Poon, Caleb Chen Cao, N. Zhang · 10 Jun 2023
17. Efficient GNN Explanation via Learning Removal-based Attribution
    Yao Rong, Guanchu Wang, Qizhang Feng, Ninghao Liu, Zirui Liu, Enkelejda Kasneci, Xia Hu · 09 Jun 2023
18. Multimodal Explainable Artificial Intelligence: A Comprehensive Review of Methodological Advances and Future Research Directions
    N. Rodis, Christos Sardianos, Panagiotis I. Radoglou-Grammatikis, Panagiotis G. Sarigiannidis, Iraklis Varlamis, Georgios Th. Papadopoulos · 09 Jun 2023
19. Teaching AI to Teach: Leveraging Limited Human Salience Data Into Unlimited Saliency-Based Training
    Colton R. Crum, Aidan Boyd, Kevin W. Bowyer, A. Czajka · 08 Jun 2023
20. A Unified Concept-Based System for Local, Global, and Misclassification Explanations
    Fatemeh Aghaeipoor, D. Asgarian, Mohammad Sabokrou · FAtt · 06 Jun 2023
21. G-CAME: Gaussian-Class Activation Mapping Explainer for Object Detectors
    Quoc Khanh Nguyen, Hung Truong Thanh Nguyen, Truong Thanh Hung Nguyen, Van Binh Truong, Quoc Hung Cao · 06 Jun 2023
22. Towards Better Explanations for Object Detection
    Van Binh Truong, Hung Truong Thanh Nguyen, Truong Thanh Hung Nguyen, Quoc Khanh Nguyen, Quoc Hung Cao · 05 Jun 2023
23. Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency
    Owen Queen, Thomas Hartvigsen, Teddy Koker, Huan He, Theodoros Tsiligkaridis, Marinka Zitnik · AI4TS · 03 Jun 2023
24. Discriminative Deep Feature Visualization for Explainable Face Recognition
    Zewei Xu, Yuhang Lu, Touradj Ebrahimi · FAtt, CVBM · 01 Jun 2023
25. Integrated Decision Gradients: Compute Your Attributions Where the Model Makes Its Decision
    Chase Walker, Sumit Kumar Jha, Kenny Chen, Rickard Ewetz · FAtt · 31 May 2023
26. Can We Trust Explainable AI Methods on ASR? An Evaluation on Phoneme Recognition
    Xiao-lan Wu, P. Bell, A. Rajan · 29 May 2023
27. Decom-CAM: Tell Me What You See, In Details! Feature-Level Interpretation via Decomposition Class Activation Map
    Yuguang Yang, Runtang Guo, Shen-Te Wu, Yimi Wang, Juan Zhang, Xuan Gong, Baochang Zhang · 27 May 2023
28. Visualizing data augmentation in deep speaker recognition
    Pengqi Li, Lantian Li, A. Hamdulla, D. Wang · 25 May 2023
29. An Experimental Investigation into the Evaluation of Explainability Methods
    Sédrick Stassin, A. Englebert, Géraldin Nanfack, Julien Albert, Nassim Versbraegen, Gilles Peiffer, Miriam Doh, Nicolas Riche, Benoit Frénay, Christophe De Vleeschouwer · XAI, ELM · 25 May 2023
30. Assessment of the Reliablity of a Model's Decision by Generalizing Attribution to the Wavelet Domain
    Gabriel Kasmi, L. Dubus, Yves-Marie Saint Drenan, Philippe Blanc · FAtt · 24 May 2023
31. Towards credible visual model interpretation with path attribution
    Naveed Akhtar, Muhammad A. A. K. Jalwana · FAtt · 23 May 2023
32. What Symptoms and How Long? An Interpretable AI Approach for Depression Detection in Social Media
    Junwei Kuang, Jiaheng Xie, Zhijun Yan · 18 May 2023
33. Explain Any Concept: Segment Anything Meets Concept-Based Explanation
    Ao Sun, Pingchuan Ma, Yuanyuan Yuan, Shuai Wang · LLMAG · 17 May 2023
34. Causal Analysis for Robust Interpretability of Neural Networks
    Ola Ahmad, Nicolas Béreux, Loïc Baret, V. Hashemi, Freddy Lecue · CML · 15 May 2023
35. Towards Visual Saliency Explanations of Face Verification
    Yuhang Lu, Zewei Xu, Touradj Ebrahimi · FAtt, XAI, CVBM · 15 May 2023
36. AURA: Automatic Mask Generator using Randomized Input Sampling for Object Removal
    Changsuk Oh, D. Shim, H. J. Kim · AAML · 13 May 2023
37. Human Attention-Guided Explainable Artificial Intelligence for Computer Vision Models
    Guoyang Liu, Jindi Zhang, Antoni B. Chan, J. H. Hsiao · 05 May 2023
38. Interpreting Vision and Language Generative Models with Semantic Visual Priors
    Michele Cafagna, L. Rojas-Barahona, Kees van Deemter, Albert Gatt · FAtt, VLM · 28 Apr 2023
39. Categorical Foundations of Explainable AI: A Unifying Theory
    Pietro Barbiero, S. Fioravanti, Francesco Giannini, Alberto Tonda, Pietro Lio', Elena Di Lavore · XAI · 27 Apr 2023
40. Are Explainability Tools Gender Biased? A Case Study on Face Presentation Attack Detection
    Marco Huber, Meiling Fang, Fadi Boutros, Naser Damer · FaML, CVBM · 26 Apr 2023
41. Efficient Explainable Face Verification based on Similarity Score Argument Backpropagation
    Marco Huber, An Luu, Philipp Terhörst, Naser Damer · CVBM, AAML · 26 Apr 2023
42. Learning Bottleneck Concepts in Image Classification
    Bowen Wang, Liangzhi Li, Yuta Nakashima, Hajime Nagahara · SSL · 20 Apr 2023
43. Explanations of Black-Box Models based on Directional Feature Interactions
    A. Masoomi, Davin Hill, Zhonghui Xu, C. Hersh, E. Silverman, P. Castaldi, Stratis Ioannidis, Jennifer Dy · FAtt · 16 Apr 2023
44. VISION DIFFMASK: Faithful Interpretation of Vision Transformers with Differentiable Patch Masking
    A. Nalmpantis, Apostolos Panagiotopoulos, John Gkountouras, Konstantinos Papakostas, Wilker Aziz · 13 Apr 2023
45. ODAM: Gradient-based instance-specific visual explanations for object detection
    Chenyang Zhao, Antoni B. Chan · FAtt · 13 Apr 2023
46. Explanation of Face Recognition via Saliency Maps
    Yuhang Lu, Touradj Ebrahimi · XAI, CVBM · 12 Apr 2023
47. Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning
    Hanjing Wang, D. Joshi, Shiqiang Wang, Q. Ji · UQCV, BDL · 10 Apr 2023
48. Explanation Strategies for Image Classification in Humans vs. Current Explainable AI
    Ruoxi Qi, Yueyuan Zheng, Yi Yang, Caleb Chen Cao, J. H. Hsiao · 10 Apr 2023
49. Towards Self-Explainability of Deep Neural Networks with Heatmap Captioning and Large-Language Models
    Osman Tursun, Simon Denman, S. Sridharan, Clinton Fookes · ViT, VLM · 05 Apr 2023
50. Fine-tuning of explainable CNNs for skin lesion classification based on dermatologists' feedback towards increasing trust
    Md Abdul Kadir, Fabrizio Nunnari, Daniel Sonntag · FAtt · 03 Apr 2023