ResearchTrend.AI
Improving Deep Learning Interpretability by Saliency Guided Training
arXiv:2111.14338 · 29 November 2021
Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi
[FAtt]

Papers citing "Improving Deep Learning Interpretability by Saliency Guided Training"

43 / 43 papers shown
  1. Generalized Semantic Contrastive Learning via Embedding Side Information for Few-Shot Object Detection
     Ruoyu Chen, Hua Zhang, Jingzhi Li, Li Liu, Zhen Huang, Xiaochun Cao (09 Apr 2025)
  2. Time-series attribution maps with regularized contrastive learning
     Steffen Schneider, Rodrigo González Laiz, Anastasiia Filippova, Markus Frey, Mackenzie W. Mathis (17 Feb 2025) [BDL, FAtt, CML, AI4TS]
  3. Quantized and Interpretable Learning Scheme for Deep Neural Networks in Classification Task
     Alireza Maleki, Mahsa Lavaei, Mohsen Bagheritabar, Salar Beigzad, Zahra Abadi (05 Dec 2024) [MQ]
  4. ConLUX: Concept-Based Local Unified Explanations
     Junhao Liu, Haonan Yu, Xin Zhang (16 Oct 2024) [FAtt, LRM]
  5. The Overfocusing Bias of Convolutional Neural Networks: A Saliency-Guided Regularization Approach
     David Bertoin, Eduardo Hugo Sanchez, Mehdi Zouitine, Emmanuel Rachelson (25 Sep 2024)
  6. Improving Network Interpretability via Explanation Consistency Evaluation
     Hefeng Wu, Hao Jiang, Keze Wang, Ziyi Tang, Xianghuan He, Liang Lin (08 Aug 2024) [FAtt, AAML]
  7. Explanation Regularisation through the Lens of Attributions
     Pedro Ferreira, Wilker Aziz, Ivan Titov (23 Jul 2024)
  8. Exploring the Interplay of Interpretability and Robustness in Deep Neural Networks: A Saliency-guided Approach
     Amira Guesmi, Nishant Suresh Aswani, Muhammad Shafique (10 May 2024) [FAtt, AAML]
  9. Explainable Interface for Human-Autonomy Teaming: A Survey
     Xiangqi Kong, Yang Xing, Antonios Tsourdos, Ziyue Wang, Weisi Guo, Adolfo Perrusquía, Andreas Wikander (04 May 2024)
  10. CA-Stream: Attention-based pooling for interpretable image recognition
      Felipe Torres, Hanwei Zhang, R. Sicre, Stéphane Ayache, Yannis Avrithis (23 Apr 2024)
  11. Bayesian Neural Networks with Domain Knowledge Priors
      Dylan Sam, Rattana Pukdee, Daniel P. Jeong, Yewon Byun, J. Zico Kolter (20 Feb 2024) [BDL, UQCV]
  12. Taking Training Seriously: Human Guidance and Management-Based Regulation of Artificial Intelligence
      C. Coglianese, Colton R. Crum (13 Feb 2024) [FaML]
  13. Towards Faithful Explanations for Text Classification with Robustness Improvement and Explanation Guided Training
      Dongfang Li, Baotian Hu, Qingcai Chen, Shan He (29 Dec 2023)
  14. SCAAT: Improving Neural Network Interpretability via Saliency Constrained Adaptive Adversarial Training
      Rui Xu, Wenkang Qin, Peixiang Huang, Hao Wang, Lin Luo (09 Nov 2023) [FAtt, AAML]
  15. MENTOR: Human Perception-Guided Pretraining for Increased Generalization
      Colton R. Crum, Adam Czajka (30 Oct 2023)
  16. REFER: An End-to-end Rationale Extraction Framework for Explanation Regularization
      Mohammad Reza Ghasemi Madani, Pasquale Minervini (22 Oct 2023)
  17. Make Your Decision Convincing! A Unified Two-Stage Framework: Self-Attribution and Decision-Making
      Yanrui Du, Sendong Zhao, Hao Wang, Yuhan Chen, Rui Bai, Zewen Qiang, Muzhen Cai, Bing Qin (20 Oct 2023)
  18. Explanation-based Training with Differentiable Insertion/Deletion Metric-aware Regularizers
      Yuya Yoshikawa, Tomoharu Iwata (19 Oct 2023)
  19. SMOOT: Saliency Guided Mask Optimized Online Training
      Ali Karkehabadi, Houman Homayoun, Avesta Sasan (01 Oct 2023) [AAML]
  20. Interpretability-Aware Vision Transformer
      Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu (14 Sep 2023) [ViT]
  21. Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities
      Munib Mesinovic, Peter Watkinson, Ting Zhu (16 Aug 2023) [FaML]
  22. TrajPAC: Towards Robustness Verification of Pedestrian Trajectory Prediction Models
      Liang Zhang, Nathaniel Xu, Pengfei Yang, Gao Jin, Cheng-Chao Huang, Lijun Zhang (11 Aug 2023)
  23. Explanation-Guided Fair Federated Learning for Transparent 6G RAN Slicing
      Swastika Roy, Hatim Chergui, C. Verikoukis (18 Jul 2023) [FedML]
  24. Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations
      Tong Sun, Yuyang Gao, Shubham Khaladkar, Sijia Liu, Liang Zhao, Younghoon Kim, S. Hong (08 Jul 2023) [AAML, FAtt, HAI]
  25. Does Saliency-Based Training bring Robustness for Deep Neural Networks in Image Classification?
      Ali Karkehabadi (28 Jun 2023) [FAtt, AAML]
  26. Encoding Time-Series Explanations through Self-Supervised Model Behavior Consistency
      Owen Queen, Thomas Hartvigsen, Teddy Koker, Huan He, Theodoros Tsiligkaridis, Marinka Zitnik (03 Jun 2023) [AI4TS]
  27. A Neural Emulator for Uncertainty Estimation of Fire Propagation
      Andrew Bolt, Conrad Sanderson, J. Dabrowski, C. Huston, Petra Kuhnert (10 May 2023)
  28. On Pitfalls of RemOve-And-Retrain: Data Processing Inequality Perspective
      J. Song, Keumgang Cha, Junghoon Seo (26 Apr 2023)
  29. Learning with Explanation Constraints
      Rattana Pukdee, Dylan Sam, J. Zico Kolter, Maria-Florina Balcan, Pradeep Ravikumar (25 Mar 2023) [FAtt]
  30. Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning
      Xialei Liu, Jiang-Tian Zhai, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng (16 Dec 2022) [CLL]
  31. Temporal Saliency Detection Towards Explainable Transformer-based Timeseries Forecasting
      Nghia Duong-Trung, Kiran Madhusudhanan, Danh Le-Phuoc (15 Dec 2022) [AI4TS]
  32. Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
      Yuyang Gao, Siyi Gu, Junji Jiang, S. Hong, Dazhou Yu, Liang Zhao (07 Dec 2022)
  33. Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods
      Nils Feldhus, Leonhard Hennig, Maximilian Dustin Nasert, Christopher Ebert, Robert Schwarzenberg, Sebastian Möller (13 Oct 2022) [FAtt]
  34. Look where you look! Saliency-guided Q-networks for generalization in visual Reinforcement Learning
      David Bertoin, Adil Zouitine, Mehdi Zouitine, Emmanuel Rachelson (16 Sep 2022)
  35. Saliency Guided Adversarial Training for Learning Generalizable Features with Applications to Medical Imaging Classification System
      Xin Li, Yao Qiang, Chengyin Li, Sijia Liu, D. Zhu (09 Sep 2022) [OOD, MedIm]
  36. Improving Disease Classification Performance and Explainability of Deep Learning Models in Radiology with Heatmap Generators
      A. Watanabe, Sara Ketabi, Khashayar Namdar, Farzad Khalvati (28 Jun 2022)
  37. VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives
      Zhuofan Ying, Peter Hase, Mohit Bansal (22 Jun 2022) [LRM]
  38. Core Risk Minimization using Salient ImageNet
      Sahil Singla, Mazda Moayeri, S. Feizi (28 Mar 2022)
  39. Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement
      Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek (15 Mar 2022)
  40. UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
      Aaron Chan, Maziar Sanjabi, Lambert Mathias, L Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz (16 Dec 2021)
  41. Feature Alignment as a Generative Process
      T. S. Farias, Jonas Maziero (23 Jun 2021) [DiffM, BDL]
  42. What went wrong and when? Instance-wise Feature Importance for Time-series Models
      S. Tonekaboni, Shalmali Joshi, Kieran Campbell, D. Duvenaud, Anna Goldenberg (05 Mar 2020) [FAtt, OOD, AI4TS]
  43. e-SNLI: Natural Language Inference with Natural Language Explanations
      Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom (04 Dec 2018) [LRM]