arXiv:1905.11092
A Rate-Distortion Framework for Explaining Neural Network Decisions

27 May 2019
Jan Macdonald, S. Wäldchen, Sascha Hauch, Gitta Kutyniok
Abstract

We formalise the widespread idea of interpreting neural network decisions as an explicit optimisation problem in a rate-distortion framework. A set of input features is deemed relevant for a classification decision if the expected classifier score remains nearly constant when the remaining features are randomised. We discuss the computational complexity of finding small sets of relevant features and show that the problem is complete for $\mathsf{NP}^{\mathsf{PP}}$, an important class of computational problems frequently arising in AI tasks. Furthermore, we show that it remains $\mathsf{NP}$-hard even to approximate the optimal solution to within any non-trivial approximation factor. Finally, we consider a continuous relaxation of the problem and develop a heuristic solution strategy based on assumed density filtering for deep ReLU neural networks. We present numerical experiments on two image classification data sets, where we outperform established methods, in particular for sparse explanations of neural network decisions.
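
For intuition, the following minimal Python sketch illustrates the relevance criterion the abstract describes: estimate by Monte Carlo sampling how much the classifier score is distorted when the features outside a candidate set S are randomised, and grow S greedily until the distortion falls below a tolerance. All names here (model, the standard-normal reference distribution, the greedy search) are illustrative assumptions, not the paper's method; in particular, the greedy loop is only a stand-in for the paper's assumed-density-filtering heuristic, since the exact problem is $\mathsf{NP}^{\mathsf{PP}}$-complete and hence intractable to solve directly.

import numpy as np

def expected_distortion(model, x, S, n_samples=256, seed=None):
    """Monte Carlo estimate of E[(model(x) - model(y))^2], where y keeps
    the features in S fixed to their values in x and resamples the rest
    from a reference distribution (a standard normal here, as a stand-in)."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    mask = np.zeros(d, dtype=bool)
    mask[list(S)] = True                      # features kept fixed
    base_score = model(x)                     # score on the original input
    # Perturbed inputs: equal to x on S, randomised on the complement of S.
    noise = rng.standard_normal((n_samples, d))
    ys = np.where(mask, x, noise)
    scores = np.array([model(y) for y in ys])
    return np.mean((scores - base_score) ** 2)

def greedy_relevant_set(model, x, eps, **kw):
    """Greedily add the feature that most reduces the estimated distortion
    until it drops below eps; a heuristic stand-in, not the paper's method."""
    S = set()
    while expected_distortion(model, x, S, **kw) > eps and len(S) < x.shape[0]:
        best = min(set(range(x.shape[0])) - S,
                   key=lambda i: expected_distortion(model, x, S | {i}, **kw))
        S.add(best)
    return S

A small set S whose distortion stays below eps is, in the abstract's sense, a sparse explanation: fixing those features alone nearly determines the classifier's score regardless of the rest of the input.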
