In this study, we introduce a novel multi-task learning algorithm based on capsule networks to encode visual attributes for image-based diagnosis. By learning visual attributes, our proposed capsule architecture, called X-Caps, is explainable: it models high-level visual attributes within the vectors of its capsules and forms predictions based solely on these interpretable features. To accomplish this, we modify the dynamic routing algorithm so that information is routed from child capsules to each visual-attribute parent capsule independently. To further increase the explainability of our method, we propose to train our network directly on the distribution of expert labels rather than on their average, as done in previous studies. At test time, this provides a meaningful metric of model confidence, penalizing over- and under-confidence, supervised directly by the agreement among human experts, while the visual attribute prediction scores are verified via a reconstruction branch of the network. To test and validate the proposed algorithm, we conduct experiments on a large dataset of over 1000 CT scans, where our proposed X-Caps, despite being a relatively small 2D capsule network, outperforms the previous state-of-the-art deep dual-path dense 3D CNN in predicting visual attribute scores while also improving diagnostic accuracy. To the best of our knowledge, this is the first study to investigate capsule networks for making predictions based on radiologist-level interpretable attributes and to apply this approach to medical image diagnosis.
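To make the label-distribution training concrete, the sketch below illustrates one plausible way to supervise a classifier on the empirical distribution of expert ratings rather than their mean, as the abstract describes. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the use of PyTorch, the 5-point ordinal rating scale, and the KL-divergence loss are all assumptions introduced here for clarity.

```python
# Hypothetical sketch: train against the distribution of expert labels
# (e.g., per-radiologist 1-5 scores) instead of their average.
# All names and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def expert_label_distribution(scores, num_classes=5):
    """Convert a list of per-expert ordinal scores into a normalized
    histogram used as the soft training target."""
    hist = torch.zeros(num_classes)
    for s in scores:
        hist[s - 1] += 1.0
    return hist / hist.sum()

def distribution_loss(logits, target_dist):
    """KL divergence between the predicted score distribution and the
    empirical distribution of expert ratings; a model that is more (or
    less) confident than the experts agree among themselves is penalized."""
    log_probs = F.log_softmax(logits, dim=-1)
    return F.kl_div(log_probs, target_dist, reduction="batchmean")

# Example: four hypothetical radiologists rated a nodule 3, 3, 4, 5.
target = expert_label_distribution([3, 3, 4, 5])   # [0, 0, 0.5, 0.25, 0.25]
logits = torch.randn(1, 5)                         # stand-in for model output
loss = distribution_loss(logits, target.unsqueeze(0))
```

Training on the full rating histogram is what lets the predicted distribution's spread serve as the confidence metric mentioned above: a sharply peaked prediction on a case where experts disagreed incurs a loss, and vice versa.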