
What went wrong and when? Instance-wise Feature Importance for Time-series Models
arXiv: 2003.02821 (v3, latest)
5 March 2020
S. Tonekaboni, Shalmali Joshi, Kieran Campbell, David Duvenaud, Anna Goldenberg
Topics: FAtt, OOD, AI4TS

Papers citing "What went wrong and when? Instance-wise Feature Importance for Time-series Models"

9 papers shown.
Clairvoyance: A Pipeline Toolkit for Medical Time Series
International Conference on Learning Representations (ICLR), 2023
Daniel Jarrett, Chang Jo Kim, Ioana Bica, Zhaozhi Qian, A. Ercole, M. Schaar
Topics: AI4TS
28 Oct 2023
Counterfactual Explanations and Predictive Models to Enhance Clinical Decision-Making in Schizophrenia using Digital Phenotyping
Juan Sebastián Canas, Francisco Gomez, Omar Costilla-Reyes
06 Jun 2023
Class-Specific Explainability for Deep Time Series Classifiers
Industrial Conference on Data Mining (IDM), 2022
Ramesh Doddaiah, Prathyush S. Parvatharaju, Elke A. Rundensteiner, Thomas Hartvigsen
Topics: FAtt, AI4TS
11 Oct 2022
Explainable AI for tailored electricity consumption feedback -- an experimental evaluation of visualizations
European Conference on Information Systems (ECIS), 2022
Jacqueline Wastensteiner, T. Weiß, Felix Haag, K. Hopf
24 Aug 2022
Improving Deep Learning Interpretability by Saliency Guided Training
Neural Information Processing Systems (NeurIPS), 2021
Aya Abdelsalam Ismail, H. C. Bravo, Soheil Feizi
Topics: FAtt
29 Nov 2021
Explaining Time Series Predictions with Dynamic Masks
International Conference on Machine Learning (ICML), 2021
Jonathan Crabbé, M. Schaar
Topics: FAtt, AI4TS
09 Jun 2021
Benchmarking Deep Learning Interpretability in Time Series Predictions
Neural Information Processing Systems (NeurIPS), 2020
Aya Abdelsalam Ismail, Mohamed K. Gunady, H. C. Bravo, Soheil Feizi
Topics: XAI, AI4TS, FAtt
26 Oct 2020
Marginal Contribution Feature Importance -- an Axiomatic Approach for The Natural Case
Amnon Catav, Boyang Fu, J. Ernst, S. Sankararaman, Ran Gilad-Bachrach
Topics: FAtt
15 Oct 2020
Learning to Evaluate Perception Models Using Planner-Centric Metrics
Computer Vision and Pattern Recognition (CVPR), 2020
Jonah Philion, Amlan Kar, Sanja Fidler
19 Apr 2020