A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI

IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2019
Abstract

Recently, artificial intelligence, and especially machine learning, has demonstrated remarkable performance in many tasks, from image processing to natural language processing, particularly with the advent of deep learning. Alongside this research progress, machine learning has spread into many different fields and disciplines. Some of them, such as the medical field, require a high level of accountability, and thus transparency, which means we need to be able to explain machine decisions and predictions and justify their reliability. This requires greater interpretability, which often means we need to understand the mechanisms underlying the algorithms. Unfortunately, the black-box nature of deep learning remains unresolved, and many machine decisions are still poorly understood. We provide a review of the notions of interpretability proposed by different research works and categorize them. Also, across an extensive list of papers, we find that interpretability research is often algorithm-centric, with few human-subject tests verifying whether proposed methods indeed enhance human interpretability. We further explore interpretability in the medical field, illustrating the complexity of the interpretability issue.
