Benchmarking Deep Learning Interpretability in Time Series Predictions

26 October 2020
Authors: Aya Abdelsalam Ismail, Mohamed K. Gunady, H. C. Bravo, S. Feizi
Topics: XAI, AI4TS, FAtt

Papers citing "Benchmarking Deep Learning Interpretability in Time Series Predictions"

19 papers

Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models
Authors: Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Sonali Parbhoo, Jesse Read
Topics: FAtt, XAI
11 Feb 2025

Unifying Prediction and Explanation in Time-Series Transformers via Shapley-based Pretraining
Authors: Qisen Cheng, Jinming Xing, Chang Xue, Xiaoran Yang
Topics: AI4TS
28 Jan 2025

Explanation Space: A New Perspective into Time Series Interpretability
Authors: Shahbaz Rezaei, Xin Liu
Topics: AI4TS
02 Sep 2024

Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction
Authors: Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita
Topics: XAI, AI4TS
30 Aug 2024

On the Evaluation Consistency of Attribution-based Explanations
Authors: Jiarui Duan, Haoling Li, Haofei Zhang, Hao Jiang, Mengqi Xue, Li Sun, Mingli Song, Jie Song
Topics: XAI
28 Jul 2024

CausalConceptTS: Causal Attributions for Time Series Classification using High Fidelity Diffusion Models
Authors: Juan Miguel Lopez Alcaraz, Nils Strodthoff
Topics: DiffM, AI4TS, CML
24 May 2024

WEITS: A Wavelet-enhanced residual framework for interpretable time series forecasting
Authors: Ziyou Guo, Yan Sun, Tieru Wu
Topics: AI4TS
17 May 2024

Can We Trust Explainable AI Methods on ASR? An Evaluation on Phoneme Recognition
Authors: Xiao-lan Wu, P. Bell, A. Rajan
29 May 2023

Data-Centric Debugging: mitigating model failures via targeted data collection
Authors: Sahil Singla, Atoosa Malemir Chegini, Mazda Moayeri, Soheil Feizi
17 Nov 2022

Why Did This Model Forecast This Future? Closed-Form Temporal Saliency Towards Causal Explanations of Probabilistic Forecasts
Authors: Chirag Raman, Hayley Hung, Marco Loog
01 Jun 2022

Benchmarking Deep AUROC Optimization: Loss Functions and Algorithmic Choices
Authors: Dixian Zhu, Xiaodong Wu, Tianbao Yang
27 Mar 2022

Identifying Suitable Tasks for Inductive Transfer Through the Analysis of Feature Attributions
Authors: Alexander Pugantsov, R. McCreadie
02 Feb 2022

HIVE: Evaluating the Human Interpretability of Visual Explanations
Authors: Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
06 Dec 2021

Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
Authors: Di Jin, Elena Sergeeva, W. Weng, Geeticka Chauhan, Peter Szolovits
Topics: OOD
05 Dec 2021

Improving Deep Learning Interpretability by Saliency Guided Training
Authors: Aya Abdelsalam Ismail, H. C. Bravo, S. Feizi
Topics: FAtt
29 Nov 2021

Temporal Dependencies in Feature Importance for Time Series Predictions
Authors: Kin Kwan Leung, Clayton Rooke, Jonathan Smith, S. Zuberi, M. Volkovs
Topics: OOD, AI4TS
29 Jul 2021

Do Feature Attribution Methods Correctly Attribute Features?
Authors: Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah
Topics: FAtt, XAI
27 Apr 2021

MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning Models on MIMIC-IV Dataset
Authors: Chuizheng Meng, Loc Trinh, Nan Xu, Yan Liu
12 Feb 2021

What went wrong and when? Instance-wise Feature Importance for Time-series Models
Authors: S. Tonekaboni, Shalmali Joshi, Kieran Campbell, D. Duvenaud, Anna Goldenberg
Topics: FAtt, OOD, AI4TS
05 Mar 2020