RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko
arXiv:1806.07421 · 19 June 2018 · FAtt

Papers citing "RISE: Randomized Input Sampling for Explanation of Black-box Models" (showing 50 of 651)

• Verifying Machine Unlearning with Explainable AI [MU], 20 Nov 2024
  Àlex Pujol Vidal, A. S. Johansen, M. N. Jahromi, Sergio Escalera, Kamal Nasrollahi, T. Moeslund

• Local vs distributed representations: What is the right basis for interpretability? [HAI], 06 Nov 2024
  Julien Colin, L. Goetschalckx, Thomas Fel, Victor Boutin, Jay Gopal, Thomas Serre, Nuria Oliver

• Explanations that reveal all through the definition of encoding [FAtt, XAI], 04 Nov 2024
  A. Puli, Nhi Nguyen, Rajesh Ranganath

• Benchmarking XAI Explanations with Human-Aligned Evaluations, 04 Nov 2024
  Rémi Kazmierczak, Steve Azzolin, Eloise Berthier, Anna Hedström, Patricia Delhomme, ..., Goran Frehse, Massimiliano Mancini, Baptiste Caramiaux, Andrea Passerini, Gianni Franchi

• SPES: Spectrogram Perturbation for Explainable Speech-to-Text Generation, 03 Nov 2024
  Dennis Fucci, Marco Gaido, Beatrice Savoldi, Matteo Negri, Mauro Cettolo, L. Bentivogli

• On the Black-box Explainability of Object Detection Models for Safe and Trustworthy Industrial Applications [AAML], 28 Oct 2024
  Alain Andres, Aitor Martinez-Seras, I. Laña, Javier Del Ser

• Increasing Interpretability of Neural Networks By Approximating Human Visual Saliency [FAtt], 21 Oct 2024
  Aidan Boyd, M. Trabelsi, H. Uzunalioglu, Dan Kushnir

• Reproducibility study of "LICO: Explainable Models with Language-Image Consistency", 17 Oct 2024
  Luan Fletcher, Robert van der Klis, Martin Sedláček, Stefan Vasilev, Christos Athanasiadis

• Bilinear MLPs enable weight-based mechanistic interpretability, 10 Oct 2024
  Michael T. Pearce, Thomas Dooms, Alice Rigg, José Oramas, Lee Sharkey

• Audio Explanation Synthesis with Generative Foundation Models, 10 Oct 2024
  Alican Akman, Qiyang Sun, Björn W. Schuller

• Unlearning-based Neural Interpretations [FAtt], 10 Oct 2024
  Ching Lam Choi, Alexandre Duplessis, Serge Belongie

• Riemann Sum Optimization for Accurate Integrated Gradients Computation, 05 Oct 2024
  Swadesh Swain, Shree Singhi

• Self-eXplainable AI for Medical Image Analysis: A Survey and New Outlooks [XAI], 03 Oct 2024
  Junlin Hou, Sicen Liu, Yequan Bie, Hongmei Wang, Andong Tan, Luyang Luo, Hao Chen

• F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI [AAML], 03 Oct 2024
  Xu Zheng, Farhad Shirani, Zhuomin Chen, Chaohao Lin, Wei Cheng, Wenbo Guo, Dongsheng Luo

• Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations [VLM], 03 Oct 2024
  Nick Jiang, Anish Kachinthaya, Suzie Petryk, Yossi Gandelsman

• One Wave to Explain Them All: A Unifying Perspective on Post-hoc Explainability [AAML, FAtt], 02 Oct 2024
  Gabriel Kasmi, Amandine Brunetto, Thomas Fel, Jayneel Parekh

• Interactive Explainable Anomaly Detection for Industrial Settings [AAML, HAI], 01 Oct 2024
  Daniel Gramelt, Timon Höfer, Ute Schmid

• PCEvE: Part Contribution Evaluation Based Model Explanation for Human Figure Drawing Assessment and Beyond, 26 Sep 2024
  Jongseo Lee, Geo Ahn, Seong Tae Kim, Jinwoo Choi

• The Overfocusing Bias of Convolutional Neural Networks: A Saliency-Guided Regularization Approach, 25 Sep 2024
  David Bertoin, Eduardo Hugo Sanchez, Mehdi Zouitine, Emmanuel Rachelson

• Deep Learning for Precision Agriculture: Post-Spraying Evaluation and Deposition Estimation, 24 Sep 2024
  Harry Rogers, Tahmina Zebin, Grzegorz Cielniak, Beatriz De La Iglesia, Ben Magri

• Interpret the Predictions of Deep Networks via Re-Label Distillation [FAtt], 20 Sep 2024
  Yingying Hua, Shiming Ge, Daichi Zhang

• Gradient-free Post-hoc Explainability Using Distillation Aided Learnable Approach, 17 Sep 2024
  Debarpan Bhattacharya, A. H. Poorjam, Deepak Mittal, S. Ganapathy

• Optimal ablation for interpretability [FAtt], 16 Sep 2024
  Maximilian Li, Lucas Janson

• InfoDisent: Explainability of Image Classification Models by Information Disentanglement, 16 Sep 2024
  Łukasz Struski, Dawid Rymarczyk, Jacek Tabor

• Integrated Multi-Level Knowledge Distillation for Enhanced Speaker Verification, 14 Sep 2024
  Wenhao Yang, Jianguo Wei, Wenhuan Lu, Xugang Lu, Lei Li

• Data Augmentation for Image Classification using Generative AI [VLM], 31 Aug 2024
  Fazle Rahat, M Shifat Hossain, Md Rubel Ahmed, Sumit Kumar Jha, Rickard Ewetz

• Perturbation on Feature Coalition: Towards Interpretable Deep Neural Networks, 23 Aug 2024
  Xuran Hu, Mingzhe Zhu, Zhenpeng Feng, Miloš Daković, Ljubiša Stanković

• LCE: A Framework for Explainability of DNNs for Ultrasound Image Based on Concept Discovery, 19 Aug 2024
  Weiji Kong, Xun Gong, Juan Wang

• An Explainable Non-local Network for COVID-19 Diagnosis, 08 Aug 2024
  Jingfu Yang, Peng Huang, Jing Hu, Shu Hu, Siwei Lyu, Xin Wang, Jun Guo, Xi Wu

• Human-inspired Explanations for Vision Transformers and Convolutional Neural Networks [ViT, AAML], 04 Aug 2024
  Mahadev Prasad Panda, Matteo Tiezzi, Martina Vilas, Gemma Roig, Bjoern M. Eskofier, Dario Zanca

• Space-scale Exploration of the Poor Reliability of Deep Learning Models: the Case of the Remote Sensing of Rooftop Photovoltaic Systems, 31 Jul 2024
  Gabriel Kasmi, L. Dubus, Yves-Marie Saint Drenan, Philippe Blanc

• Faithful and Plausible Natural Language Explanations for Image Classification: A Pipeline Approach [FAtt], 30 Jul 2024
  Adam Wojciechowski, Mateusz Lango, Ondrej Dusek

• Mean Opinion Score as a New Metric for User-Evaluation of XAI Methods, 29 Jul 2024
  Hyeon Yu, Jenny Benois-Pineau, Romain Bourqui, R. Giot, Alexey Zhukov

• On the Evaluation Consistency of Attribution-based Explanations [XAI], 28 Jul 2024
  Jiarui Duan, Haoling Li, Haofei Zhang, Hao Jiang, Mengqi Xue, Li Sun, Mingli Song, Jie Song

• Comprehensive Attribution: Inherently Explainable Vision Model with Feature Detector [VLM, FAtt], 27 Jul 2024
  Xianren Zhang, Dongwon Lee, Suhang Wang

• Dissecting Multiplication in Transformers: Insights into LLMs, 22 Jul 2024
  Luyu Qiu, Jianing Li, Chi Su, C. Zhang, Lei Chen

• Mechanistically Interpreting a Transformer-based 2-SAT Solver: An Axiomatic Approach, 18 Jul 2024
  Nils Palumbo, Ravi Mangal, Zifan Wang, Saranya Vijayakumar, Corina S. Pasareanu, Somesh Jha

• Beyond the Veil of Similarity: Quantifying Semantic Continuity in Explainable AI, 17 Jul 2024
  Qi Huang, Emanuele Mezzi, Osman Mutlu, Miltiadis Kofinas, Vidya Prasad, Shadnan Azwad Khan, Elena Ranguelova, N. V. Stein

• I2AM: Interpreting Image-to-Image Latent Diffusion Models via Bi-Attribution Maps, 17 Jul 2024
  Junseo Park, Hyeryung Jang

• Beyond Spatial Explanations: Explainable Face Recognition in the Frequency Domain [CVBM], 16 Jul 2024
  Marco Huber, Naser Damer

• Benchmarking the Attribution Quality of Vision Models [FAtt], 16 Jul 2024
  Robin Hesse, Simone Schaub-Meyer, Stefan Roth

• XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach, 16 Jul 2024
  Truong Thanh Hung Nguyen, Phuc Truong Loc Nguyen, Hung Cao

• Layer-Wise Relevance Propagation with Conservation Property for ResNet [FAtt], 12 Jul 2024
  Seitaro Otsuki, T. Iida, Félix Doublet, Tsubasa Hirakawa, Takayoshi Yamashita, H. Fujiyoshi, Komei Sugiura

• Understanding Visual Feature Reliance through the Lens of Complexity [FAtt, CoGe], 08 Jul 2024
  Thomas Fel, Louis Bethune, Andrew Kyle Lampinen, Thomas Serre, Katherine Hermann

• Towards A Comprehensive Visual Saliency Explanation Framework for AI-based Face Recognition Systems [CVBM, FAtt, XAI], 08 Jul 2024
  Yuhang Lu, Zewei Xu, Touradj Ebrahimi

• Explainable Image Recognition via Enhanced Slot-attention Based Classifier [OCL], 08 Jul 2024
  Bowen Wang, Liangzhi Li, Jiahao Zhang, Yuta Nakashima, Hajime Nagahara

• SLIM: Spuriousness Mitigation with Minimal Human Annotations, 08 Jul 2024
  Xiwei Xuan, Ziquan Deng, Hsuan-Tien Lin, Kwan-Liu Ma

• Explainable AI: Comparative Analysis of Normal and Dilated ResNet Models for Fundus Disease Classification [MedIm], 07 Jul 2024
  P. N. Karthikayan, Yoga Sri Varshan V, Hitesh Gupta Kattamuri, Umarani Jayaraman

• Regulating Model Reliance on Non-Robust Features by Smoothing Input Marginal Density [AAML], 05 Jul 2024
  Peiyu Yang, Naveed Akhtar, Mubarak Shah, Ajmal Saeed Mian

• Integrated feature analysis for deep learning interpretation and class activation maps, 01 Jul 2024
  Yanli Li, Tahereh Hassanzadeh, D. Shamonin, Monique Reijnierse, A. H. V. D. H. Mil, B. Stoel