arXiv 2505.21595

Relevance-driven Input Dropout: an Explanation-guided Regularization Technique
27 May 2025
Shreyas Gururaj, Lars Grüne, Wojciech Samek, Sebastian Lapuschkin, Leander Weber
Papers citing "Relevance-driven Input Dropout: an Explanation-guided Regularization Technique" (50 / 59 papers shown)
Mechanistic understanding and validation of large AI models with SemanticLens. Maximilian Dreyer, J. Berend, Tobias Labarta, Johanna Vielhaben, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek. 10 Jan 2025.
Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers. Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin. [ViT] 22 Aug 2024.
The Missing Curve Detectors of InceptionV1: Applying Sparse Autoencoders to InceptionV1 Early Vision. Liv Gorton. 06 Jun 2024.
Interpreting CLIP with Sparse Linear Concept Embeddings (SpLiCE). Usha Bhalla, Alexander X. Oesterling, Suraj Srinivas, Flavio du Pin Calmon, Himabindu Lakkaraju. 16 Feb 2024.
AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers. Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, Wojciech Samek. 08 Feb 2024.
Understanding Video Transformers via Universal Concept Discovery. M. Kowal, Achal Dave, Rares Andrei Ambrus, Adrien Gaidon, Konstantinos G. Derpanis, P. Tokmakov. [ViT] 19 Jan 2024.
Sparse Autoencoders Find Highly Interpretable Features in Language Models. Hoagy Cunningham, Aidan Ewart, Logan Riggs, R. Huben, Lee Sharkey. [MILM] 15 Sep 2023.
Multi-dimensional concept discovery (MCD): A unifying framework with completeness guarantees. Johanna Vielhaben, Stefan Blücher, Nils Strodthoff. 27 Jan 2023.
AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning. Tao Yang, Jinghao Deng, Xiaojun Quan, Qifan Wang, Shaoliang Nie. 12 Oct 2022.
From Attribution Maps to Human-Understandable Explanations through Concept Relevance Propagation. Reduan Achtibat, Maximilian Dreyer, Ilona Eisenbraun, S. Bosse, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin. [FAtt] 07 Jun 2022.
Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI. Sami Ede, Serop Baghdadlian, Leander Weber, A. Nguyen, Dario Zanca, Wojciech Samek, Sebastian Lapuschkin. [CLL] 04 May 2022.
Training language models to follow instructions with human feedback. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe. [OSLM, ALM] 04 Mar 2022.
Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond. Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel G. Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne. [XAI, ELM] 14 Feb 2022.
Measurably Stronger Explanation Reliability via Model Canonization. Franz Motzkus, Leander Weber, Sebastian Lapuschkin. [FAtt] 14 Feb 2022.
ECQˣ: Explainability-Driven Quantization for Low-Bit and Sparse DNNs. Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin. [MQ] 09 Sep 2021.
R-Drop: Regularized Dropout for Neural Networks. Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Hao Fei, Tie-Yan Liu. 28 Jun 2021.
Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy. Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin. 24 Jun 2021.
Large-scale Unsupervised Semantic Segmentation. Shangqi Gao, Zhong-Yu Li, Ming-Hsuan Yang, Ming-Ming Cheng, Junwei Han, Philip Torr. [UQCV] 06 Jun 2021.
Noise Injection-based Regularization for Point Cloud Processing. Xiao Zang, Yi Xie, Siyu Liao, Jie Chen, Bo Yuan. [3DPC] 28 Mar 2021.
PCT: Point cloud transformer. Meng-Hao Guo, Junxiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph Robert Martin, Shimin Hu. [ViT, 3DPC] 17 Dec 2020.
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby. [ViT] 22 Oct 2020.
Utilizing Explainable AI for Quantization and Pruning of Deep Neural Networks. Muhammad Sabih, Frank Hannig, J. Teich. [MQ] 20 Aug 2020.
Explicit Regularisation in Gaussian Noise Injections. A. Camuto, M. Willetts, Umut Simsekli, Stephen J. Roberts, Chris Holmes. 14 Jul 2020.
The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization. Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, ..., Samyak Parajuli, Mike Guo, D. Song, Jacob Steinhardt, Justin Gilmer. [OOD] 29 Jun 2020.
Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI. L. Arras, Ahmed Osman, Wojciech Samek. [XAI, AAML] 16 Mar 2020.
xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems. Vineel Nagisetty, Laura Graves, Joseph Scott, Vijay Ganesh. [GAN, DRL] 24 Feb 2020.
Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning. Seul-Ki Yeom, P. Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, K. Müller, Wojciech Samek. [CVBM] 18 Dec 2019.
Natural Adversarial Examples. Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, D. Song. [OODD] 16 Jul 2019.
When Does Label Smoothing Help? Rafael Müller, Simon Kornblith, Geoffrey E. Hinton. [UQCV] 06 Jun 2019.
Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned. Elena Voita, David Talbot, F. Moiseev, Rico Sennrich, Ivan Titov. 23 May 2019.
CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Y. Yoo. [OOD] 13 May 2019.
Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations. Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau. [FAtt, 3DH, HAI] 04 Apr 2019.
EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. Jason W. Wei, Kai Zou. 31 Jan 2019.
GAN Augmentation: Augmenting Training Data using Generative Adversarial Networks. Christopher Bowles, Liang Chen, Ricardo Guerrero, P. Bentley, R. Gunn, A. Hammers, D. A. Dickie, M. V. Valdés Hernández, Joanna M. Wardlaw, Daniel Rueckert. [GAN, MedIm] 25 Oct 2018.
Dynamic Graph CNN for Learning on Point Clouds. Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, M. Bronstein, Justin Solomon. [GNN, 3DPC] 24 Jan 2018.
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres. [FAtt] 30 Nov 2017.
mixup: Beyond Empirical Risk Minimization. Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, David Lopez-Paz. [NoLa] 25 Oct 2017.
Regularizing Deep Neural Networks by Noise: Its Interpretation and Optimization. Hyeonwoo Noh, Tackgeun You, Jonghwan Mun, Bohyung Han. [NoLa] 14 Oct 2017.
Random Erasing Data Augmentation. Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, Yi Yang. 16 Aug 2017.
PatchShuffle Regularization. Guoliang Kang, Xuanyi Dong, Liang Zheng, Yi Yang. 22 Jul 2017.
PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. C. Qi, L. Yi, Hao Su, Leonidas Guibas. [3DPC, 3DV] 07 Jun 2017.
A Unified Approach to Interpreting Model Predictions. Scott M. Lundberg, Su-In Lee. [FAtt] 22 May 2017.
Network Dissection: Quantifying Interpretability of Deep Visual Representations. David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba. [MILM, FAtt] 19 Apr 2017.
Hide-and-Seek: Forcing a Network to be Meticulous for Weakly-supervised Object and Action Localization. Krishna Kumar Singh, Yong Jae Lee. 13 Apr 2017.
Interpretable Explanations of Black Boxes by Meaningful Perturbation. Ruth C. Fong, Andrea Vedaldi. [FAtt, AAML] 11 Apr 2017.
Learning Important Features Through Propagating Activation Differences. Avanti Shrikumar, Peyton Greenside, A. Kundaje. [FAtt] 10 Apr 2017.
Axiomatic Attribution for Deep Networks. Mukund Sundararajan, Ankur Taly, Qiqi Yan. [OOD, FAtt] 04 Mar 2017.
Visualizing Deep Neural Network Decisions: Prediction Difference Analysis. L. Zintgraf, Taco S. Cohen, T. Adel, Max Welling. [FAtt] 15 Feb 2017.
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. C. Qi, Hao Su, Kaichun Mo, Leonidas Guibas. [3DH, 3DPC, 3DV, PINN] 02 Dec 2016.
DisturbLabel: Regularizing CNN on the Loss Layer. Lingxi Xie, Jingdong Wang, Zhen Wei, Meng Wang, Qi Tian. 30 Apr 2016.