ResearchTrend.AI — Papers › 1806.07421 › Cited By
RISE: Randomized Input Sampling for Explanation of Black-box Models


19 June 2018
Vitali Petsiuk
Abir Das
Kate Saenko
    FAtt

Papers citing "RISE: Randomized Input Sampling for Explanation of Black-box Models"

50 / 651 papers shown
Title
Evaluating Model Explanations without Ground Truth
Kaivalya Rawal
Zihao Fu
Eoin Delaney
Chris Russell
FAtt
XAI
31
0
0
15 May 2025
DocVXQA: Context-Aware Visual Explanations for Document Question Answering
Mohamed Ali Souibgui
Changkyu Choi
Andrey Barsky
Kangsoo Jung
Ernest Valveny
Dimosthenis Karatzas
25
0
0
12 May 2025
From Pixels to Perception: Interpretable Predictions via Instance-wise Grouped Feature Selection
Moritz Vandenhirtz
Julia E. Vogt
38
0
0
09 May 2025
Explainable Face Recognition via Improved Localization
Rashik Shadman
Daqing Hou
Faraz Hussain
M. G. Sarwar Murshed
CVBM
FAtt
26
0
0
04 May 2025
ABE: A Unified Framework for Robust and Faithful Attribution-Based Explainability
Zhiyu Zhu
Jiayu Zhang
Zhibo Jin
Fang Chen
Jianlong Zhou
FAtt
24
0
0
03 May 2025
Computational Identification of Regulatory Statements in EU Legislation
Gijs Jan Brandsma
Jens Blom-Hansen
Christiaan Meijer
Kody Moodley
AILaw
51
0
0
01 May 2025
ODExAI: A Comprehensive Object Detection Explainable AI Evaluation
Loc Phuc Truong Nguyen
Hung Truong Thanh Nguyen
Hung Cao
66
0
0
27 Apr 2025
What Makes for a Good Saliency Map? Comparing Strategies for Evaluating Saliency Maps in Explainable AI (XAI)
Felix Kares
Timo Speith
Hanwei Zhang
Markus Langer
FAtt
XAI
38
0
0
23 Apr 2025
Towards Spatially-Aware and Optimally Faithful Concept-Based Explanations
Shubham Kumar
Dwip Dalal
Narendra Ahuja
21
0
0
15 Apr 2025
Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations
Indu Panigrahi
Sunnie S. Y. Kim
Amna Liaqat
Rohan Jinturkar
Olga Russakovsky
Ruth C. Fong
Parastoo Abtahi
FAtt
HAI
57
0
0
14 Apr 2025
Metric-Guided Synthesis of Class Activation Mapping
Alejandro Luque-Cerpa
Elizabeth Polgreen
Ajitha Rajan
Hazem Torfah
27
0
0
14 Apr 2025
Are We Merely Justifying Results ex Post Facto? Quantifying Explanatory Inversion in Post-Hoc Model Explanations
Zhen Tan
Song Wang
Yifan Li
Yu Kong
Jundong Li
Tianlong Chen
Huan Liu
FAtt
43
0
0
11 Apr 2025
Uncovering the Structure of Explanation Quality with Spectral Analysis
Johannes Maeß
G. Montavon
Shinichi Nakajima
Klaus-Robert Müller
Thomas Schnake
FAtt
40
0
0
11 Apr 2025
Attention IoU: Examining Biases in CelebA using Attention Maps
Aaron Serianni
Tyler Zhu
Olga Russakovsky
V. V. Ramaswamy
42
0
0
25 Mar 2025
Dynamic Accumulated Attention Map for Interpreting Evolution of Decision-Making in Vision Transformer
Yi Liao
Yongsheng Gao
Weichuan Zhang
44
1
0
18 Mar 2025
Where do Large Vision-Language Models Look at when Answering Questions?
X. Xing
Chia-Wen Kuo
Li Fuxin
Yulei Niu
Fan Chen
Ming Li
Ying Wu
Longyin Wen
Sijie Zhu
LRM
58
0
0
18 Mar 2025
Axiomatic Explainer Globalness via Optimal Transport
Davin Hill
Josh Bone
A. Masoomi
Max Torop
Jennifer Dy
100
1
0
13 Mar 2025
A Siamese Network to Detect If Two Iris Images Are Monozygotic
Yongle Yuan
Kevin W. Bowyer
58
0
0
12 Mar 2025
i-WiViG: Interpretable Window Vision GNN
Ivica Obadic
D. Kangin
Dario Augusto Borges Oliveira
Plamen Angelov
Xiao Xiang Zhu
59
0
0
11 Mar 2025
Now you see me! A framework for obtaining class-relevant saliency maps
Nils Philipp Walter
Jilles Vreeken
Jonas Fischer
FAtt
40
0
0
10 Mar 2025
Attention, Please! PixelSHAP Reveals What Vision-Language Models Actually Focus On
Roni Goldshmidt
MLLM
VLM
44
0
0
09 Mar 2025
FW-Shapley: Real-time Estimation of Weighted Shapley Values
Pranoy Panda
Siddharth Tandon
V. Balasubramanian
TDI
65
0
0
09 Mar 2025
Towards Locally Explaining Prediction Behavior via Gradual Interventions and Measuring Property Gradients
Niklas Penzel
Joachim Denzler
FAtt
50
0
0
07 Mar 2025
QPM: Discrete Optimization for Globally Interpretable Image Classification
Thomas Norrenbrock
T. Kaiser
Sovan Biswas
R. Manuvinakurike
Bodo Rosenhahn
55
0
0
27 Feb 2025
Walking the Web of Concept-Class Relationships in Incrementally Trained Interpretable Models
Susmit Agrawal
Deepika Vemuri
S. Paul
Vineeth N. Balasubramanian
CLL
67
0
0
27 Feb 2025
Grad-ECLIP: Gradient-based Visual and Textual Explanations for CLIP
Chenyang Zhao
Kun Wang
J. H. Hsiao
Antoni B. Chan
CLIP
71
0
0
26 Feb 2025
Model Lakes
Koyena Pal
David Bau
Renée J. Miller
67
0
0
24 Feb 2025
Disentangling Visual Transformers: Patch-level Interpretability for Image Classification
Guillaume Jeanneret
Loïc Simon
F. Jurie
ViT
49
0
0
24 Feb 2025
Analyzing Factors Influencing Driver Willingness to Accept Advanced Driver Assistance Systems
Hannah Musau
Nana Kankam Gyimah
Judith Mwakalonge
G. Comert
Saidi Siuhi
43
0
0
23 Feb 2025
SPEX: Scaling Feature Interaction Explanations for LLMs
J. S. Kang
Landon Butler
Abhineet Agarwal
Y. E. Erginbas
Ramtin Pedarsani
Kannan Ramchandran
Bin Yu
VLM
LRM
72
0
0
20 Feb 2025
Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution
Shichang Zhang
Tessa Han
Usha Bhalla
Hima Lakkaraju
FAtt
147
0
0
17 Feb 2025
Narrowing Information Bottleneck Theory for Multimodal Image-Text Representations Interpretability
Zhiyu Zhu
Zhibo Jin
Jiayu Zhang
Nan Yang
Jiahao Huang
Jianlong Zhou
Fang Chen
41
0
0
16 Feb 2025
Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models
Célia Wafa Ayad
Thomas Bonnier
Benjamin Bosch
Sonali Parbhoo
Jesse Read
FAtt
XAI
100
0
0
11 Feb 2025
Discovering Chunks in Neural Embeddings for Interpretability
Shuchen Wu
Stephan Alaniz
Eric Schulz
Zeynep Akata
42
0
0
03 Feb 2025
Generating visual explanations from deep networks using implicit neural representations
Michal Byra
Henrik Skibbe
GAN
FAtt
29
0
0
20 Jan 2025
COMIX: Compositional Explanations using Prototypes
S. Sivaprasad
D. Kangin
Plamen Angelov
Mario Fritz
139
0
0
10 Jan 2025
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein
Carsten T. Lüth
U. Schlegel
Till J. Bungert
Mennatallah El-Assady
Paul F. Jäger
XAI
ELM
42
2
0
03 Jan 2025
Multi-Head Explainer: A General Framework to Improve Explainability in CNNs and Transformers
Bohang Sun
Pietro Liò
ViT
AAML
40
1
0
02 Jan 2025
A Tale of Two Imperatives: Privacy and Explainability
Supriya Manna
Niladri Sett
94
0
0
30 Dec 2024
Attribution for Enhanced Explanation with Transferable Adversarial eXploration
Zhiyu Zhu
Jiayu Zhang
Zhibo Jin
Huaming Chen
Jianlong Zhou
Fang Chen
AAML
ViT
38
0
0
27 Dec 2024
A Review of Multimodal Explainable Artificial Intelligence: Past, Present and Future
Shilin Sun
Wenbin An
Feng Tian
Fang Nan
Qidong Liu
J. Liu
N. Shah
Ping Chen
93
2
0
18 Dec 2024
Beyond Accuracy: On the Effects of Fine-tuning Towards Vision-Language Model's Prediction Rationality
Qitong Wang
Tang Li
Kien X. Nguyen
Xi Peng
85
0
0
17 Dec 2024
Advancing Attribution-Based Neural Network Explainability through Relative Absolute Magnitude Layer-Wise Relevance Propagation and Multi-Component Evaluation
Davor Vukadin
Petar Afrić
Marin Šilić
Goran Delač
FAtt
93
2
0
12 Dec 2024
Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey
Yunkai Dang
Kaichen Huang
Jiahao Huo
Yibo Yan
S. Huang
...
Kun Wang
Yong Liu
Jing Shao
Hui Xiong
Xuming Hu
LRM
101
14
0
03 Dec 2024
Neuron Abandoning Attention Flow: Visual Explanation of Dynamics inside CNN Models
Yi Liao
Yongsheng Gao
Weichuan Zhang
74
0
0
02 Dec 2024
Explaining the Unexplained: Revealing Hidden Correlations for Better Interpretability
Wen-Dong Jiang
Chih-Yung Chang
Show-Jane Yen
Diptendu Sinha Roy
FAtt
HAI
67
1
0
02 Dec 2024
Explaining Object Detectors via Collective Contribution of Pixels
Toshinori Yamauchi
Hiroshi Kera
K. Kawamoto
ObjD
FAtt
66
1
0
01 Dec 2024
Explaining the Impact of Training on Vision Models via Activation Clustering
Ahcène Boubekki
Samuel G. Fadel
Sebastian Mair
89
0
0
29 Nov 2024
Transparent Neighborhood Approximation for Text Classifier Explanation
Yi Cai
Arthur Zimek
Eirini Ntoutsi
Gerhard Wunder
AAML
64
0
0
25 Nov 2024
Interpreting Object-level Foundation Models via Visual Precision Search
Ruoyu Chen
Siyuan Liang
Jingzhi Li
Shiming Liu
Maosen Li
Zheng Huang
Hua Zhang
Xiaochun Cao
FAtt
82
4
0
25 Nov 2024