Less is More: The Influence of Pruning on the Explainability of CNNs

Over the last decade, deep learning models have become the state of the art for solving complex computer vision problems. These modern computer vision models have millions of parameters, which presents two major challenges: (1) the increased computational requirements hamper deployment in resource-constrained environments, such as mobile or IoT devices, and (2) explaining the complex decisions of such networks to humans is challenging. Network pruning is a technique for reducing model complexity by removing less important parameters. The work presented in this paper investigates whether this reduction in technical complexity also improves perceived explainability. To do so, we conducted a pre-study and two human-grounded experiments, assessing the effects of different pruning ratios on explainability. Overall, we evaluated four compression rates (i.e., 2, 4, 8, and 32) across 37,500 tasks on Mechanical Turk. Results indicate that lower compression rates have a positive influence on explainability, while higher compression rates show negative effects. Furthermore, we identified sweet spots that increase both perceived explainability and the model's performance.
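The abstract does not specify which pruning method was used, so the following is only a minimal sketch of how compression rates like those evaluated here could be realized with generic magnitude (L1) pruning via PyTorch's `torch.nn.utils.prune` utilities. The toy network and the `prune_model` helper are illustrative assumptions, not the paper's actual setup; a compression rate r is interpreted as keeping roughly 1/r of the weights.

```python
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune


def prune_model(model: nn.Module, compression_rate: float) -> nn.Module:
    """Return a magnitude-pruned copy of `model` in which roughly
    1/compression_rate of the weights in each Conv2d/Linear layer survive.
    (Hypothetical helper for illustration only.)"""
    pruned = copy.deepcopy(model)
    amount = 1.0 - 1.0 / compression_rate  # fraction of weights to remove
    for module in pruned.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            # L1 (magnitude) pruning: zero out the smallest-magnitude weights.
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # bake the zeros into the tensor
    return pruned


# Toy CNN standing in for the networks studied in the paper (assumption).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

# One pruned variant per compression rate evaluated in the paper.
variants = {rate: prune_model(model, rate) for rate in (2, 4, 8, 32)}
```

In practice, pruned models are typically fine-tuned afterward to recover accuracy; the sketch omits that step and only shows how the four sparsity levels map onto the stated compression rates (e.g., rate 32 removes about 97% of the weights).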