Striving for Simplicity: The All Convolutional Net
21 December 2014
Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller
FAtt

Papers citing "Striving for Simplicity: The All Convolutional Net"

Showing 50 of 697 citing papers.

Holistically Explainable Vision Transformers
Moritz D Boehle, Mario Fritz, Bernt Schiele
ViT
20 Jan 2023

Opti-CAM: Optimizing saliency maps for interpretability
Hanwei Zhang, Felipe Torres, R. Sicre, Yannis Avrithis, Stéphane Ayache
17 Jan 2023

Modulation spectral features for speech emotion recognition using deep neural networks
Premjeet Singh, Md. Sahidullah, G. Saha
14 Jan 2023

Efficient Activation Function Optimization through Surrogate Modeling
G. Bingham, Risto Miikkulainen
13 Jan 2023

Explainability and Robustness of Deep Visual Classification Models
Jindong Gu
AAML
03 Jan 2023

Deep Hierarchy Quantization Compression algorithm based on Dynamic Sampling
W. Jiang, Gang Liu, Xiaofeng Chen, Yipeng Zhou
FedML
30 Dec 2022

Explainable AI for Bioinformatics: Methods, Tools, and Applications
Md. Rezaul Karim, Tanhim Islam, Oya Beyan, Christoph Lange, Michael Cochez, Dietrich Rebholz-Schuhmann, Stefan Decker
25 Dec 2022

DExT: Detector Explanation Toolkit
Deepan Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro
21 Dec 2022

When and Why Test Generators for Deep Learning Produce Invalid Inputs: an Empirical Study
Vincenzo Riccio, Paolo Tonella
AAML
21 Dec 2022

Bort: Towards Explainable Neural Networks with Bounded Orthogonal Constraint
Borui Zhang, Wenzhao Zheng, Jie Zhou, Jiwen Lu
AAML
18 Dec 2022

Domain Generalization by Learning and Removing Domain-specific Features
Yuzhu Ding, Lei Wang, Binxin Liang, Shuming Liang, Yang Wang, Fangxiao Chen
OOD
14 Dec 2022

Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods
Ming-Xiu Jiang, Saeed Khorram, Li Fuxin
FAtt
13 Dec 2022

COmic: Convolutional Kernel Networks for Interpretable End-to-End Learning on (Multi-)Omics Data
Jonas C. Ditz, Bernhard Reuter, Nícolas Pfeifer
02 Dec 2022

FedGPO: Heterogeneity-Aware Global Parameter Optimization for Efficient Federated Learning
Young Geun Kim, Carole-Jean Wu
FedML
30 Nov 2022

Interpretations Cannot Be Trusted: Stealthy and Effective Adversarial Perturbations against Interpretable Deep Learning
Eldor Abdukhamidov, Mohammed Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
AAML
29 Nov 2022

Towards More Robust Interpretation via Local Gradient Alignment
Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller, Taesup Moon
FAtt
29 Nov 2022

Attribution-based XAI Methods in Computer Vision: A Review
Kumar Abhishek, Deeksha Kamath
27 Nov 2022

Evaluating Feature Attribution Methods for Electrocardiogram
J. Suh, Jimyeong Kim, Euna Jung, Wonjong Rhee
FAtt
23 Nov 2022

Explaining Image Classifiers with Multiscale Directional Image Representation
Stefan Kolek, Robert Windesheim, Héctor Andrade-Loarca, Gitta Kutyniok, Ron Levie
22 Nov 2022

CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel, Agustin Picard, Louis Bethune, Thibaut Boissin, David Vigouroux, Julien Colin, Rémi Cadène, Thomas Serre
17 Nov 2022

Parameter-Efficient Transformer with Hybrid Axial-Attention for Medical Image Segmentation
Yiyue Hu, Lei Zhang, Nan Mu, Leijun Liu
ViT, MedIm
17 Nov 2022

Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine
A. Chaddad, Qizong Lu, Jiali Li, Y. Katib, R. Kateb, C. Tanougast, Ahmed Bouridane, Ahmed Abdulkadir
OOD
17 Nov 2022

Explaining Cross-Domain Recognition with Interpretable Deep Classifier
Yiheng Zhang, Ting Yao, Zhaofan Qiu, Tao Mei
OOD
15 Nov 2022

What Makes a Good Explanation?: A Harmonized View of Properties of Explanations
Zixi Chen, Varshini Subhash, Marton Havasi, Weiwei Pan, Finale Doshi-Velez
XAI, FAtt
10 Nov 2022

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
XAI, FAtt, AAML
09 Nov 2022

Privacy Meets Explainability: A Comprehensive Impact Benchmark
S. Saifullah, Dominique Mercier, Adriano Lucieri, Andreas Dengel, Sheraz Ahmed
08 Nov 2022

Exploring Explainability Methods for Graph Neural Networks
Harsh Patel, Shivam Sahni
03 Nov 2022

Explainable Deep Learning to Profile Mitochondrial Disease Using High Dimensional Protein Expression Data
Atif Khan, C. Lawless, Amy Vincent, Satish Pilla, S. Ramesh, A. Mcgough
31 Oct 2022

HesScale: Scalable Computation of Hessian Diagonals
Mohamed Elsayed, A. R. Mahmood
20 Oct 2022

XC: Exploring Quantitative Use Cases for Explanations in 3D Object Detection
Sunsheng Gu, Vahdat Abdelzad, Krzysztof Czarnecki
20 Oct 2022

Similarity of Neural Architectures using Adversarial Attack Transferability
Jaehui Hwang, Dongyoon Han, Byeongho Heo, Song Park, Sanghyuk Chun, Jong-Seok Lee
AAML
20 Oct 2022

Analysing Training-Data Leakage from Gradients through Linear Systems and Gradient Matching
Cangxiong Chen, Neill D. F. Campbell
FedML
20 Oct 2022

Toward the application of XAI methods in EEG-based systems
Andrea Apicella, Francesco Isgrò, A. Pollastro, R. Prevete
OOD, AI4TS
12 Oct 2022

AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning
Tao Yang, Jinghao Deng, Xiaojun Quan, Qifan Wang, Shaoliang Nie
12 Oct 2022

Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors
Federico Baldassarre, Quentin Debard, Gonzalo Fiz Pontiveros, Tri Kurniawan Wijaya
07 Oct 2022

Critical Learning Periods for Multisensory Integration in Deep Networks
Michael Kleinman, Alessandro Achille, Stefano Soatto
06 Oct 2022

Improving Convolutional Neural Networks for Fault Diagnosis by Assimilating Global Features
Saif S. S. Al-Wahaibi, Qiugang Lu
03 Oct 2022

Causal Proxy Models for Concept-Based Model Explanations
Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, Christopher Potts
MILM
28 Sep 2022

Recipro-CAM: Fast gradient-free visual explanations for convolutional neural networks
Seokhyun Byun, Won-Jo Lee
FAtt
28 Sep 2022

I-SPLIT: Deep Network Interpretability for Split Computing
Federico Cunico, Luigi Capogrosso, Francesco Setti, D. Carra, Franco Fummi, Marco Cristani
23 Sep 2022

Review On Deep Learning Technique For Underwater Object Detection
Radhwan Adnan Dakhil, A. R. Khayeat
21 Sep 2022

Learning Symbolic Model-Agnostic Loss Functions via Meta-Learning
Christian Raymond, Qi Chen, Bing Xue, Mengjie Zhang
FedML
19 Sep 2022

Look where you look! Saliency-guided Q-networks for generalization in visual Reinforcement Learning
David Bertoin, Adil Zouitine, Mehdi Zouitine, Emmanuel Rachelson
16 Sep 2022

Explainable AI for clinical and remote health applications: a survey on tabular and time series data
Flavio Di Martino, Franca Delmastro
AI4TS
14 Sep 2022

DASH: Visual Analytics for Debiasing Image Classification via User-Driven Synthetic Data Augmentation
Bum Chul Kwon, Jungsoo Lee, Chaeyeon Chung, Nyoungwoo Lee, Ho-Jin Choi, Jaegul Choo
14 Sep 2022

Boosting Robustness Verification of Semantic Feature Neighborhoods
Anan Kabaha, Dana Drachsler-Cohen
AAML
12 Sep 2022

Trace and Detect Adversarial Attacks on CNNs using Feature Response Maps
Mohammadreza Amirian, Friedhelm Schwenker, Thilo Stadelmann
AAML
24 Aug 2022

Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto, Tiago B. Gonçalves, João Ribeiro Pinto, W. Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso
XAI
19 Aug 2022

Gradient Mask: Lateral Inhibition Mechanism Improves Performance in Artificial Neural Networks
Lei Jiang, Yongqing Liu, Shihai Xiao, Yansong Chua
14 Aug 2022

The Weighting Game: Evaluating Quality of Explainability Methods
Lassi Raatikainen, Esa Rahtu
FAtt, XAI
12 Aug 2022