Extending the Relative Seriality Formalism for Interpretable Deep Learning of Normal Tissue Complication Probability Models

Abstract
We formally demonstrate that the relative seriality model of Källman et al. maps exactly onto a simple type of convolutional neural network. This mapping yields a natural interpretation of feedforward connections in the convolutional layer and of stacked intermediate pooling layers in terms of bystander effects and hierarchical tissue organization, respectively. These results serve as a proof of principle for radiobiologically interpretable deep learning of normal tissue complication probability from large-scale imaging and dosimetry datasets.
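To make the correspondence concrete, the sketch below implements the standard relative seriality NTCP formula, NTCP = [1 - prod_i (1 - P(D_i)^s)^{v_i}]^{1/s} with the Poisson voxel response P(D) = 2^{-exp(e*gamma*(1 - D/D50))}, and notes how the voxelwise response plays the role of a shared nonlinearity while the product over fractional volumes acts as a pooling stage. The parameter values, function names, and the single-level pooling structure are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def voxel_response(dose, d50, gamma):
    """Poisson-model voxel response: P(D) = 2 ** (-exp(e * gamma * (1 - D / D50)))."""
    return 2.0 ** (-np.exp(np.e * gamma * (1.0 - dose / d50)))

def relative_seriality_ntcp(dose, rel_volume, d50, gamma, s):
    """Relative seriality NTCP:
        NTCP = [1 - prod_i (1 - P(D_i)^s)^{v_i}]^{1/s}
    The shared voxelwise response acts like a convolutional nonlinearity applied
    to every dose bin, and the weighted product over voxels acts like a global
    pooling layer, mirroring the CNN interpretation described in the abstract.
    """
    p = voxel_response(dose, d50, gamma)
    pooled = np.prod((1.0 - p ** s) ** rel_volume)
    return (1.0 - pooled) ** (1.0 / s)

# Toy differential DVH with illustrative (hypothetical) parameter values.
dose = np.array([10.0, 30.0, 50.0, 70.0])      # Gy per bin
rel_volume = np.array([0.4, 0.3, 0.2, 0.1])    # fractional volumes, sum to 1
print(relative_seriality_ntcp(dose, rel_volume, d50=52.3, gamma=2.0, s=1.0))
```

A hierarchical variant would simply apply the same pooling operation over nested subvolumes before the final aggregation, which is the structure the abstract associates with stacked intermediate pooling layers.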