Explanation Space: A New Perspective into Time Series Interpretability

Human-understandable explanations of deep learning models are essential for many critical and sensitive applications. Unlike image or tabular data, where the importance of each input feature (for the classifier's decision) can be projected directly onto the input, the distinguishing features of a time series (e.g., a dominant frequency) are often hard to convey in the time domain in a way a user can easily understand. Additionally, most explanation methods require a baseline value to indicate the absence of a feature. However, the notion of a missing feature, often defined as black pixels in vision tasks or zero/mean values for tabular data, is not well defined for time series. Although explainable AI (XAI) methods have been adopted from the tabular and vision domains into the time series domain, these differences limit their application in practice. In this paper, we propose a simple yet effective method that allows a model originally trained on the time domain to be interpreted in other explanation spaces using existing XAI methods. We suggest five explanation spaces, each of which can potentially alleviate these issues for certain types of time series. Our method can be easily integrated into existing platforms without any change to trained models or XAI methods. The code will be released upon acceptance.
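To make the idea of an alternative explanation space concrete, the sketch below shows one way such a setup could look in practice: a time-domain classifier is wrapped with an inverse transform (here, an inverse FFT) so that an off-the-shelf gradient-based attribution method produces importance scores per frequency bin instead of per time step. This is a minimal illustration under our own assumptions (PyTorch, the frequency domain as the explanation space, plain gradient saliency); it is not the paper's exact implementation, and the class and variable names are hypothetical.

# Illustrative sketch only: explaining a time-domain model in the frequency domain.
import torch
import torch.nn as nn

class FrequencySpaceWrapper(nn.Module):
    """Maps frequency-domain inputs back to the time domain before calling
    the original, unmodified time-domain classifier."""
    def __init__(self, time_domain_model: nn.Module, signal_length: int):
        super().__init__()
        self.model = time_domain_model
        self.signal_length = signal_length

    def forward(self, x_freq: torch.Tensor) -> torch.Tensor:
        # x_freq: complex rFFT coefficients of shape (batch, channels, freq_bins)
        x_time = torch.fft.irfft(x_freq, n=self.signal_length, dim=-1)
        return self.model(x_time)

# Usage: transform the input once, then run any gradient-based XAI method
# on the wrapper; the resulting attributions live in the frequency domain.
signal_length = 128
base_model = nn.Sequential(nn.Flatten(), nn.Linear(signal_length, 2))  # toy time-domain classifier
wrapper = FrequencySpaceWrapper(base_model, signal_length)

x_time = torch.randn(1, 1, signal_length)                     # original time-domain sample
x_freq = torch.fft.rfft(x_time, dim=-1).requires_grad_(True)  # same sample in the explanation space

score = wrapper(x_freq)[0, 1]        # logit of class 1
score.backward()
freq_attribution = x_freq.grad.abs() # gradient saliency per frequency bin

Because the wrapper leaves the trained model and the attribution method untouched, the same pattern could in principle be repeated for other candidate explanation spaces by swapping the inverse transform.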
@article{rezaei2025_2409.01354,
  title   = {Explanation Space: A New Perspective into Time Series Interpretability},
  author  = {Shahbaz Rezaei and Xin Liu},
  journal = {arXiv preprint arXiv:2409.01354},
  year    = {2025}
}