Striving for Simplicity: The All Convolutional Net
21 December 2014 (FAtt)
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller
arXiv:1412.6806

Papers citing "Striving for Simplicity: The All Convolutional Net"

Showing 50 of 696 citing papers. Community tags are shown in parentheses after each date.
Feature Visualization in 3D Convolutional Neural Networks
Chunpeng Li, Ya-tang Li
12 May 2025 (FAtt)

From Pixels to Perception: Interpretable Predictions via Instance-wise Grouped Feature Selection
Moritz Vandenhirtz, Julia E. Vogt
09 May 2025

Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review
Sonal Allana, Mohan Kankanhalli, Rozita Dara
05 May 2025

Explainable Face Recognition via Improved Localization
Rashik Shadman, Daqing Hou, Faraz Hussain, M. G. Sarwar Murshed
04 May 2025 (CVBM, FAtt)

Axiomatic Explainer Globalness via Optimal Transport
Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy
13 Mar 2025

LED-Merging: Mitigating Safety-Utility Conflicts in Model Merging with Location-Election-Disjoint
Qianli Ma, Dongrui Liu, Qian Chen, Linfeng Zhang, Jing Shao
24 Feb 2025 (MoMe)

Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution
Shichang Zhang, Tessa Han, Usha Bhalla, Hima Lakkaraju
17 Feb 2025 (FAtt)

Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks
Jon Vadillo, Roberto Santana, J. A. Lozano, Marta Z. Kwiatkowska
17 Feb 2025 (BDL, AAML)

Explaining 3D Computed Tomography Classifiers with Counterfactuals
Joseph Paul Cohen, Louis Blankemeier, Akshay S. Chaudhari
11 Feb 2025 (MedIm)

Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment
Harrish Thasarathan, Julian Forsyth, Thomas Fel, M. Kowal, Konstantinos G. Derpanis
06 Feb 2025

Deep Unfolding Multi-modal Image Fusion Network via Attribution Analysis
Haowen Bai, Zixiang Zhao, Jiangshe Zhang, Baisong Jiang, Lilun Deng, Yukun Cui, Shuang Xu, Chunxia Zhang
03 Feb 2025

B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable
Shreyash Arya, Sukrut Rao, Moritz Bohle, Bernt Schiele
28 Jan 2025

Pfungst and Clever Hans: Identifying the unintended cues in a widely used Alzheimer's disease MRI dataset using explainable deep learning
C. Tinauer, Maximilian Sackl, Rudolf Stollberger, Stefan Ropele, C. Langkammer
27 Jan 2025 (AAML)

Generating visual explanations from deep networks using implicit neural representations
Michal Byra, Henrik Skibbe
20 Jan 2025 (GAN, FAtt)

Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein, Carsten T. Lüth, U. Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger
03 Jan 2025 (XAI, ELM)

Multi-Head Explainer: A General Framework to Improve Explainability in CNNs and Transformers
Bohang Sun, Pietro Liò
02 Jan 2025 (ViT, AAML)

Explaining the Impact of Training on Vision Models via Activation Clustering
Ahcène Boubekki, Samuel G. Fadel, Sebastian Mair
29 Nov 2024

Interplay between Federated Learning and Explainable Artificial Intelligence: a Scoping Review
Luis M. Lopez-Ramos, Florian Leiser, Aditya Rastogi, Steven Hicks, Inga Strümke, V. Madai, Tobias Budig, Ali Sunyaev, A. Hilbert
07 Nov 2024

Debiasing Mini-Batch Quadratics for Applications in Deep Learning
Lukas Tatzel, Bálint Mucsányi, Osane Hackel, Philipp Hennig
18 Oct 2024

Unlearning-based Neural Interpretations
Ching Lam Choi, Alexandre Duplessis, Serge Belongie
10 Oct 2024 (FAtt)

Riemann Sum Optimization for Accurate Integrated Gradients Computation
Swadesh Swain, Shree Singhi
05 Oct 2024

Tackling the Accuracy-Interpretability Trade-off in a Hierarchy of Machine Learning Models for the Prediction of Extreme Heatwaves
Alessandro Lovo, Amaury Lancelin, Corentin Herbert, Freddy Bouchet
01 Oct 2024 (AI4CE)

Beyond Skip Connection: Pooling and Unpooling Design for Elimination Singularities
Chengkun Sun, Jinqian Pan, Juoli Jin, Russell Stevens Terry, Jiang Bian, Jie Xu
20 Sep 2024

Explainable AI for Autism Diagnosis: Identifying Critical Brain Regions Using fMRI Data
Suryansh Vidya, Kush Gupta, Amir Aly, Andy Wills, Emmanuel Ifeachor, Rohit Shankar
19 Sep 2024

Explanation Space: A New Perspective into Time Series Interpretability
Shahbaz Rezaei, Xin Liu
02 Sep 2024 (AI4TS)

Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita
30 Aug 2024 (XAI, AI4TS)

Towards Certified Unlearning for Deep Neural Networks
Binchi Zhang, Yushun Dong, Tianhao Wang, Wenlin Yao
01 Aug 2024 (MU)

On the Evaluation Consistency of Attribution-based Explanations
Jiarui Duan, Haoling Li, Haofei Zhang, Hao Jiang, Mengqi Xue, Li Sun, Mingli Song
28 Jul 2024 (XAI)

Don't Fear Peculiar Activation Functions: EUAF and Beyond
Qianchao Wang, Shijun Zhang, Dong Zeng, Zhaoheng Xie, Hengtao Guo, Feng-Lei Fan, Tieyong Zeng
12 Jul 2024

Explaining Graph Neural Networks for Node Similarity on Graphs
Daniel Daza, C. Chu, T. Tran, Daria Stepanova, Michael Cochez, Paul T. Groth
10 Jul 2024

Restyling Unsupervised Concept Based Interpretable Networks with Generative Models
Jayneel Parekh, Quentin Bouniot, Pavlo Mozharovskyi, A. Newson, Florence d'Alché-Buc
01 Jul 2024 (SSL)

Machine Learning Techniques in Automatic Music Transcription: A Systematic Survey
Fatemeh Jamshidi, Gary Pike, Amit Das, Richard Chapman
20 Jun 2024

Phoneme Discretized Saliency Maps for Explainable Detection of AI-Generated Voice
Shubham Gupta, Mirco Ravanelli, Pascal Germain, Cem Subakan
14 Jun 2024 (FAtt)

Graphical Perception of Saliency-based Model Explanations
Yayan Zhao, Mingwei Li, Matthew Berger
11 Jun 2024 (XAI, FAtt)

Enhancing predictive imaging biomarker discovery through treatment effect analysis
Shuhan Xiao, Lukas Klein, Jens Petersen, Philipp Vollmuth, Paul F. Jaeger, Klaus H. Maier-Hein
04 Jun 2024

CONFINE: Conformal Prediction for Interpretable Neural Networks
Linhui Huang, S. Lala, N. Jha
01 Jun 2024

Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution
Eslam Zaher, Maciej Trzaskowski, Quan Nguyen, Fred Roosta
16 May 2024 (AAML)

A Fresh Look at Sanity Checks for Saliency Maps
Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
03 May 2024 (FAtt, LRM)

Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova
02 May 2024

Rad4XCNN: a new agnostic method for post-hoc global explanation of CNN-derived features by means of radiomics
Francesco Prinzi, C. Militello, Calogero Zarcaro, T. Bartolotta, Salvatore Gaglio, Salvatore Vitabile
26 Apr 2024

A Learning Paradigm for Interpretable Gradients
Felipe Figueroa, Hanwei Zhang, R. Sicre, Yannis Avrithis, Stéphane Ayache
23 Apr 2024 (FAtt)

Machine Unlearning via Null Space Calibration
Huiqiang Chen, Tianqing Zhu, Xin Yu, Wanlei Zhou
21 Apr 2024

Structured Gradient-based Interpretations via Norm-Regularized Adversarial Training
Shizhan Gong, Qi Dou, Farzan Farnia
06 Apr 2024 (FAtt)

A Peg-in-hole Task Strategy for Holes in Concrete
André Yuji Yasutomi, Hiroki Mori, Tetsuya Ogata
29 Mar 2024

Uncertainty Quantification for Gradient-based Explanations in Neural Networks
Mihir Mulye, Matias Valdenegro-Toro
25 Mar 2024 (UQCV, FAtt)

What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
14 Mar 2024

Explainable Learning with Gaussian Processes
Kurt Butler, Guanchao Feng, P. Djuric
11 Mar 2024

Are Classification Robustness and Explanation Robustness Really Strongly Correlated? An Analysis Through Input Loss Landscape
Tiejin Chen, Wenwang Huang, Linsey Pang, Dongsheng Luo, Hua Wei
09 Mar 2024 (OOD)

Feature CAM: Interpretable AI in Image Classification
Frincy Clement, Ji Yang, Irene Cheng
08 Mar 2024 (FAtt)

Mudjacking: Patching Backdoor Vulnerabilities in Foundation Models
Hongbin Liu, Michael K. Reiter, Neil Zhenqiang Gong
22 Feb 2024 (AAML)