arXiv:1910.08485
Understanding Deep Networks via Extremal Perturbations and Smooth Masks
Ruth C. Fong, Mandela Patrick, Andrea Vedaldi (18 October 2019) [AAML]

Papers citing "Understanding Deep Networks via Extremal Perturbations and Smooth Masks" (50 of 75 papers shown)
Title
Attention IoU: Examining Biases in CelebA using Attention Maps
Aaron Serianni, Tyler Zhu, Olga Russakovsky, V. V. Ramaswamy (25 Mar 2025)

Model Lakes
Koyena Pal, David Bau, Renée J. Miller (24 Feb 2025)

Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis (06 Feb 2025)

Generating visual explanations from deep networks using implicit neural representations
Michal Byra, Henrik Skibbe (20 Jan 2025) [GAN, FAtt]

Layerwise Change of Knowledge in Neural Networks
Xu Cheng, Lei Cheng, Zhaoran Peng, Yang Xu, Tian Han, Quanshi Zhang (13 Sep 2024) [KELM, FAtt]

Human-inspired Explanations for Vision Transformers and Convolutional Neural Networks
Mahadev Prasad Panda, Matteo Tiezzi, Martina Vilas, Gemma Roig, Bjoern M. Eskofier, Dario Zanca (04 Aug 2024) [ViT, AAML]

Interpreting Low-level Vision Models with Causal Effect Maps
Jinfan Hu, Jinjin Gu, Shiyao Yu, Fanghua Yu, Zheyuan Li, Zhiyuan You, Chaochao Lu, Chao Dong (29 Jul 2024) [CML]
Graphical Perception of Saliency-based Model Explanations
Yayan Zhao, Mingwei Li, Matthew Berger (11 Jun 2024) [XAI, FAtt]

Made to Order: Discovering monotonic temporal changes via self-supervised video ordering
Charig Yang, Weidi Xie, Andrew Zisserman (25 Apr 2024)

What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song (14 Mar 2024)

Explainable Multi-Camera 3D Object Detection with Transformer-Based Saliency Maps
Till Beemelmanns, Wassim Zahr, Lutz Eckstein (22 Dec 2023)

Occlusion Sensitivity Analysis with Augmentation Subspace Perturbation in Deep Feature Space
Pedro Valois, Koichiro Niinuma, Kazuhiro Fukui (25 Nov 2023) [AAML]

Zero-shot Translation of Attention Patterns in VQA Models to Natural Language
Leonard Salewski, A. Sophia Koepke, Hendrik P. A. Lensch, Zeynep Akata (08 Nov 2023)

Multiple Different Black Box Explanations for Image Classifiers
Hana Chockler, D. A. Kelly, Daniel Kroening (25 Sep 2023) [FAtt]

Interpretability-Aware Vision Transformer
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu (14 Sep 2023) [ViT]

Generative Perturbation Analysis for Probabilistic Black-Box Anomaly Attribution
T. Idé, Naoki Abe (09 Aug 2023)
Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability
Usha Bhalla, Suraj Srinivas, Himabindu Lakkaraju (27 Jul 2023) [FAtt, CML]

DeepMediX: A Deep Learning-Driven Resource-Efficient Medical Diagnosis Across the Spectrum
Kishore Babu Nampalle, Pradeep Singh, Vivek Narayan Uppala, Balasubramanian Raman (01 Jul 2023) [MedIm]

Decom-CAM: Tell Me What You See, In Details! Feature-Level Interpretation via Decomposition Class Activation Map
Yuguang Yang, Runtang Guo, Shen-Te Wu, Yimi Wang, Juan Zhang, Xuan Gong, Baochang Zhang (27 May 2023)

UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs
V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky (27 Mar 2023)

ICICLE: Interpretable Class Incremental Continual Learning
Dawid Rymarczyk, Joost van de Weijer, Bartosz Zieliński, Bartlomiej Twardowski (14 Mar 2023) [CLL]

A Lifted Bregman Formulation for the Inversion of Deep Neural Networks
Xiaoyu Wang, Martin Benning (01 Mar 2023)

ProtoSeg: Interpretable Semantic Segmentation with Prototypical Parts
Mikolaj Sacha, Dawid Rymarczyk, Lukasz Struski, Jacek Tabor, Bartosz Zieliński (28 Jan 2023) [VLM]

Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache (17 Jan 2023)

Attribution-based XAI Methods in Computer Vision: A Review
Kumar Abhishek, Deeksha Kamath (27 Nov 2022)
Explaining Image Classifiers with Multiscale Directional Image Representation
Stefan Kolek, Robert Windesheim, Héctor Andrade-Loarca, Gitta Kutyniok, Ron Levie (22 Nov 2022)

ViT-CX: Causal Explanation of Vision Transformers
Weiyan Xie, Xiao-hui Li, Caleb Chen Cao, Nevin L. Zhang (06 Nov 2022) [ViT]

PlanT: Explainable Planning Transformers via Object-Level Representations
Katrin Renz, Kashyap Chitta, Otniel-Bogdan Mercea, A. Sophia Koepke, Zeynep Akata, Andreas Geiger (25 Oct 2022) [ViT]

Object-ABN: Learning to Generate Sharp Attention Maps for Action Recognition
Tomoya Nitta, Tsubasa Hirakawa, H. Fujiyoshi, Toru Tamaki (27 Jul 2022)

Adaptive occlusion sensitivity analysis for visually explaining video recognition networks
Tomoki Uchiyama, Naoya Sogi, S. Iizuka, Koichiro Niinuma, Kazuhiro Fukui (26 Jul 2022)

Explanation-based Counterfactual Retraining (XCR): A Calibration Method for Black-box Models
Liu Zhendong, Wenyu Jiang, Yan Zhang, Chongjun Wang (22 Jun 2022) [CML]

Spatial-temporal Concept based Explanation of 3D ConvNets
Yi Ji, Yu Wang, K. Mori, Jien Kato (09 Jun 2022) [3DPC, FAtt]

Dual Decomposition of Convex Optimization Layers for Consistent Attention in Medical Images
Tom Ron, M. Weiler-Sagie, Tamir Hazan (06 Jun 2022) [FAtt, MedIm]

On the Eigenvalues of Global Covariance Pooling for Fine-grained Visual Recognition
Yue Song, N. Sebe, Wei Wang (26 May 2022)
What You See is What You Classify: Black Box Attributions
Steven Stalder, Nathanael Perraudin, R. Achanta, F. Pérez-Cruz, Michele Volpi (23 May 2022) [FAtt]

Learnable Visual Words for Interpretable Image Recognition
Wenxi Xiao, Zhengming Ding, Hongfu Liu (22 May 2022) [VLM]

It Takes Two Flints to Make a Fire: Multitask Learning of Neural Relation and Explanation Classifiers
Zheng Tang, Mihai Surdeanu (25 Apr 2022)

CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations
Leonard Salewski, A. Sophia Koepke, Hendrik P. A. Lensch, Zeynep Akata (05 Apr 2022) [LRM, NAI]

Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar (25 Feb 2022) [FAtt]

Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip H. S. Torr (23 Jan 2022) [FAtt]

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky (06 Dec 2021)

Self-Interpretable Model with Transformation Equivariant Interpretation
Yipei Wang, Xiaoqian Wang (09 Nov 2021)

Gradient Frequency Modulation for Visually Explaining Video Understanding Models
Xinmiao Lin, Wentao Bao, Matthew Wright, Yu Kong (01 Nov 2021) [FAtt, AAML]
Explaining Latent Representations with a Corpus of Examples
Jonathan Crabbé, Zhaozhi Qian, F. Imrie, M. Schaar (28 Oct 2021) [FAtt]

Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models?
Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen (22 Oct 2021)

TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models
S. Chatterjee, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, R. Rao, Chompunuch Sarasaen, Oliver Speck, A. Nürnberger (16 Oct 2021) [MedIm]

Consistent Explanations by Contrastive Learning
Vipin Pillai, Soroush Abbasi Koohpayegani, Ashley Ouligian, Dennis Fong, Hamed Pirsiavash (01 Oct 2021) [FAtt]

DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications
Dongqi Han, Zhiliang Wang, Wenqi Chen, Ying Zhong, Su Wang, Han Zhang, Jiahai Yang, Xingang Shi, Xia Yin (23 Sep 2021) [AAML]

PACE: Posthoc Architecture-Agnostic Concept Extractor for Explaining CNNs
V. Kamakshi, Uday Gupta, N. C. Krishnan (31 Aug 2021)

Understanding of Kernels in CNN Models by Suppressing Irrelevant Visual Features in Images
Jiafan Zhuang, Wanying Tao, Jianfei Xing, Wei Shi, Ruixuan Wang, Weishi Zheng (25 Aug 2021) [FAtt]