arXiv:2104.00031 · Cited By
NetAdaptV2: Efficient Neural Architecture Search with Fast Super-Network Training and Architecture Optimization
31 March 2021
Tien-Ju Yang
Yi-Lun Liao
Vivienne Sze
Papers citing "NetAdaptV2: Efficient Neural Architecture Search with Fast Super-Network Training and Architecture Optimization" (41 of 41 papers shown)
Pixel-level Certified Explanations via Randomized Smoothing
Alaa Anani, Tobias Lorenz, Mario Fritz, Bernt Schiele · FAtt, AAML · 41 / 0 / 0 · 18 Jun 2025

B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable
Shreyash Arya, Sukrut Rao, Moritz Bohle, Bernt Schiele · 184 / 3 / 0 · 28 Jan 2025

Learning local discrete features in explainable-by-design convolutional neural networks
Pantelis I. Kaplanoglou, Konstantinos Diamantaras · FAtt · 99 / 1 / 0 · 31 Oct 2024

Unlearning-based Neural Interpretations
Ching Lam Choi, Alexandre Duplessis, Serge Belongie · FAtt · 262 / 0 / 0 · 10 Oct 2024

Benchmarking the Attribution Quality of Vision Models
Robin Hesse, Simone Schaub-Meyer, Stefan Roth · FAtt · 87 / 3 / 0 · 16 Jul 2024

Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals
Susu Sun, S. Woerner, Andreas Maier, Lisa M. Koch, Christian F. Baumgartner · FAtt · 72 / 1 / 0 · 08 Jun 2024

How Video Meetings Change Your Expression
Sumit Sarin, Utkarsh Mall, Purva Tendulkar, Carl Vondrick · CVBM · 93 / 0 / 0 · 03 Jun 2024

Towards Explaining Hypercomplex Neural Networks
Eleonora Lopez, Eleonora Grassucci, D. Capriotti, Danilo Comminiello · 98 / 3 / 0 · 26 Mar 2024

Explainable Transformer Prototypes for Medical Diagnoses
Ugur Demir, Debesh Jha, Zheyu Zhang, Elif Keles, Bradley Allen, Aggelos K. Katsaggelos, Ulas Bagci · MedIm · 35 / 3 / 0 · 11 Mar 2024

3VL: Using Trees to Improve Vision-Language Models' Interpretability
Nir Yellinek, Leonid Karlinsky, Raja Giryes · CoGe, VLM · 296 / 3 / 0 · 28 Dec 2023

Explainability of Vision Transformers: A Comprehensive Review and New Perspectives
Rojina Kashefi, Leili Barekatain, Mohammad Sabokrou, Fatemeh Aghaeipoor · ViT · 105 / 10 / 0 · 12 Nov 2023

Greedy PIG: Adaptive Integrated Gradients
Kyriakos Axiotis, Sami Abu-El-Haija, Lin Chen, Matthew Fahrbach, Gang Fu · FAtt · 60 / 0 / 0 · 10 Nov 2023

A Framework for Interpretability in Machine Learning for Medical Imaging
Alan Q. Wang, Batuhan K. Karaman, Heejong Kim, Jacob Rosenthal, Rachit Saluja, Sean I. Young, M. Sabuncu · AI4CE · 128 / 13 / 0 · 02 Oct 2023

From Classification to Segmentation with Explainable AI: A Study on Crack Detection and Growth Monitoring
Florent Forest, Hugo Porta, D. Tuia, Olga Fink · 85 / 11 / 0 · 20 Sep 2023

On Model Explanations with Transferable Neural Pathways
Xinmiao Lin, Wentao Bao, Qi Yu, Yu Kong · 37 / 0 / 0 · 18 Sep 2023

Text-to-Image Models for Counterfactual Explanations: a Black-Box Approach
Guillaume Jeanneret, Loïc Simon, Frédéric Jurie · DiffM · 95 / 13 / 0 · 14 Sep 2023

PDiscoNet: Semantically consistent part discovery for fine-grained recognition
Robert van der Klis, Stephan Alaniz, Massimiliano Mancini, C. Dantas, Dino Ienco, Zeynep Akata, Diego Marcos · 84 / 12 / 0 · 06 Sep 2023

DeViL: Decoding Vision features into Language
Meghal Dani, Isabel Rio-Torto, Stephan Alaniz, Zeynep Akata · VLM · 75 / 8 / 0 · 04 Sep 2023

FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods
Robin Hesse, Simone Schaub-Meyer, Stefan Roth · AAML · 81 / 34 / 0 · 11 Aug 2023

Right for the Wrong Reason: Can Interpretable ML Techniques Detect Spurious Correlations?
Susu Sun, Lisa M. Koch, Christian F. Baumgartner · 84 / 16 / 0 · 23 Jul 2023

B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers
Moritz D Boehle, Navdeeppal Singh, Mario Fritz, Bernt Schiele · 157 / 27 / 0 · 19 Jun 2023

Probabilistic Concept Bottleneck Models
Eunji Kim, Dahuin Jung, Sangha Park, Siwon Kim, Sung-Hoon Yoon · 143 / 72 / 0 · 02 Jun 2023

Towards credible visual model interpretation with path attribution
Naveed Akhtar, Muhammad A. A. K. Jalwana · FAtt · 141 / 5 / 0 · 23 May 2023

Better Understanding Differences in Attribution Methods via Systematic Evaluations
Sukrut Rao, Moritz D Boehle, Bernt Schiele · XAI · 93 / 4 / 0 · 21 Mar 2023

Adversarial Counterfactual Visual Explanations
Guillaume Jeanneret, Loïc Simon, F. Jurie · DiffM · 102 / 29 / 0 · 17 Mar 2023

Inherently Interpretable Multi-Label Classification Using Class-Specific Counterfactuals
Susu Sun, S. Woerner, Andreas Maier, Lisa M. Koch, Christian F. Baumgartner · FAtt · 98 / 17 / 0 · 01 Mar 2023

Variational Information Pursuit for Interpretable Predictions
Aditya Chattopadhyay, Kwan Ho Ryan Chan, B. Haeffele, D. Geman, René Vidal · DRL · 106 / 14 / 0 · 06 Feb 2023

Neural Insights for Digital Marketing Content Design
F. Kong, Yuan Li, Houssam Nassif, Tanner Fiez, Ricardo Henao, Shreya Chakrabarti · 3DV · 58 / 12 / 0 · 02 Feb 2023

Holistically Explainable Vision Transformers
Moritz D Boehle, Mario Fritz, Bernt Schiele · ViT · 95 / 9 / 0 · 20 Jan 2023

Evaluating Feature Attribution Methods for Electrocardiogram
J. Suh, Jimyeong Kim, Euna Jung, Wonjong Rhee · FAtt · 45 / 2 / 0 · 23 Nov 2022

"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction
Sunnie S. Y. Kim, E. A. Watkins, Olga Russakovsky, Ruth C. Fong, Andrés Monroy-Hernández · 99 / 115 / 0 · 02 Oct 2022

Interpretable by Design: Learning Predictors by Composing Interpretable Queries
Aditya Chattopadhyay, Stewart Slocum, B. Haeffele, René Vidal, D. Geman · 111 / 24 / 0 · 03 Jul 2022

Towards Better Understanding Attribution Methods
Sukrut Rao, Moritz Bohle, Bernt Schiele · XAI · 89 / 33 / 0 · 20 May 2022

B-cos Networks: Alignment is All We Need for Interpretability
Moritz D Boehle, Mario Fritz, Bernt Schiele · 105 / 86 / 0 · 20 May 2022

Explaining Deep Convolutional Neural Networks via Latent Visual-Semantic Filter Attention
Yu Yang, Seung Wook Kim, Jungseock Joo · FAtt · 61 / 17 / 0 · 10 Apr 2022

Diffusion Models for Counterfactual Explanations
Guillaume Jeanneret, Loïc Simon, F. Jurie · DiffM · 118 / 59 / 0 · 29 Mar 2022

A Cognitive Explainer for Fetal ultrasound images classifier Based on Medical Concepts
Ying-Shuai Wang, Yunxia Liu, Licong Dong, Xuzhou Wu, Huabin Zhang, Qiongyu Ye, Desheng Sun, Xiaobo Zhou, Kehong Yuan · 59 / 0 / 0 · 19 Jan 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky · 161 / 119 / 0 · 06 Dec 2021

Optimising for Interpretability: Convolutional Dynamic Alignment Networks
Moritz D Boehle, Mario Fritz, Bernt Schiele · 19 / 2 / 0 · 27 Sep 2021

A Comparison of Deep Saliency Map Generators on Multispectral Data in Object Detection
Jens Bayer, David Munch, Michael Arens · 3DPC · 66 / 4 / 0 · 26 Aug 2021

A Survey on Deep Domain Adaptation and Tiny Object Detection Challenges, Techniques and Datasets
Muhammed Muzammul, Xi Li · ObjD · 96 / 11 / 0 · 16 Jul 2021