Simplifying the explanation of deep neural networks with sufficient and necessary feature-sets: case of text classification

8 October 2020
Florentin Flambeau Jiechieu Kameni
Norbert Tsopzé
Topics: XAI, FAtt, MedIm
arXiv:2010.03724
Abstract

Over the last decade, deep neural networks (DNNs) have demonstrated impressive performance on a wide range of problems in domains such as medicine, finance, and law. Despite this performance, they have long been considered black-box systems that provide good results without being able to explain them. The inability to explain a system's decision presents a serious risk in critical domains such as medicine, where people's lives are at stake. Much work has been done to uncover the inner reasoning of deep neural networks. Saliency methods explain model decisions by assigning weights to input features that reflect their contribution to the classifier's decision. However, not all features are needed to explain a model decision: in practice, a classifier may rely strongly on a subset of features that is sufficient to explain a particular decision. The aim of this article is to propose a method that simplifies the prediction explanations of one-dimensional (1D) convolutional neural networks (CNNs) by identifying sufficient and necessary feature-sets. We also propose an adaptation of Layer-wise Relevance Propagation for 1D-CNNs. Experiments carried out on multiple datasets show that the distribution of relevance among features is similar to that obtained with a well-known state-of-the-art model, and that the extracted sufficient and necessary feature-sets appear perceptually convincing to humans.
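
To make the sufficiency/necessity idea concrete, the following is a minimal sketch, not the authors' implementation: it probes a toy 1D-CNN text classifier by masking tokens, treating a feature subset as "sufficient" if keeping only those tokens preserves the predicted class, and "necessary" if masking them changes it. The model architecture, vocabulary, padding-based masking, and subset choice are all illustrative assumptions.

# Sketch only: masking-based check of sufficient/necessary token subsets
# for a toy 1D-CNN text classifier (architecture and masking are assumptions).
import torch
import torch.nn as nn

class TextCNN1D(nn.Module):
    """Toy 1D-CNN text classifier: embedding -> Conv1d -> global max pool -> linear."""
    def __init__(self, vocab_size=1000, embed_dim=32, num_classes=2, pad_idx=0):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=pad_idx)
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, token_ids):                     # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)     # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values
        return self.fc(x)

def predicted_class(model, token_ids):
    with torch.no_grad():
        return model(token_ids).argmax(dim=1).item()

def is_sufficient(model, token_ids, subset_idx, pad_idx=0):
    """Subset is 'sufficient' if keeping only those tokens (masking the rest)
    preserves the originally predicted class."""
    original = predicted_class(model, token_ids)
    kept = torch.full_like(token_ids, pad_idx)
    kept[:, subset_idx] = token_ids[:, subset_idx]
    return predicted_class(model, kept) == original

def is_necessary(model, token_ids, subset_idx, pad_idx=0):
    """Subset is 'necessary' if masking those tokens changes the predicted class."""
    original = predicted_class(model, token_ids)
    removed = token_ids.clone()
    removed[:, subset_idx] = pad_idx
    return predicted_class(model, removed) != original

if __name__ == "__main__":
    model = TextCNN1D()
    doc = torch.randint(1, 1000, (1, 20))   # one random 20-token "document"
    subset = [3, 4, 5]                      # hypothetical candidate feature-set
    print("sufficient:", is_sufficient(model, doc, subset))
    print("necessary:", is_necessary(model, doc, subset))

In the paper's setting, candidate feature-sets would come from relevance scores (e.g. the proposed Layer-wise Relevance Propagation adaptation) rather than being chosen by hand as in this toy example.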
