A Survey on Understanding, Visualizations, and Explanation of Deep Neural Networks

2 February 2021
Atefeh Shahroudnejad
Communities: FaML, AAML, AI4CE, XAI
Abstract

Recent advances in machine learning and signal processing have produced a surge of interest in Deep Neural Networks (DNNs), owing to their unprecedented performance and high accuracy on challenging problems of significant engineering importance. However, when such deep architectures are used to make critical decisions, particularly ones that affect human lives (e.g., in control systems and medical applications), it is of paramount importance to understand, trust, and, in one word, "explain" the reasoning behind a deep model's decisions. In many applications, artificial neural networks (including DNNs) are treated as black-box systems that provide little insight into their internal processing. Although some recent efforts have begun to explain the behaviors and decisions of deep networks, the explainable artificial intelligence (XAI) domain, which aims to reason about the behavior and decisions of DNNs, is still in its infancy. The aim of this paper is to provide a comprehensive overview of the understanding, visualization, and explanation of the internal and overall behavior of DNNs.
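
As a concrete illustration of the kind of technique such a survey covers, the sketch below computes a gradient-based saliency map, one of the most common DNN visualization methods: it highlights the input pixels whose small changes most affect the predicted class score. This example is not taken from the paper; the choice of model, the random stand-in input, and the PyTorch/torchvision usage (torchvision >= 0.13 for the weights argument) are illustrative assumptions.

    import torch
    import torchvision.models as models

    # Any image classifier works here; an untrained ResNet-18 is used only
    # as a stand-in so the snippet runs without downloading weights.
    model = models.resnet18(weights=None)
    model.eval()

    # Stand-in input image; requires_grad lets us backpropagate to the pixels.
    image = torch.randn(1, 3, 224, 224, requires_grad=True)

    # Forward pass, pick the top-scoring class, and backpropagate its score
    # to the input to obtain d(score)/d(pixel).
    scores = model(image)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()

    # The saliency map is the maximum absolute gradient across color channels.
    saliency = image.grad.abs().max(dim=1).values.squeeze(0)
    print(saliency.shape)  # torch.Size([224, 224])

In practice the saliency tensor would be normalized and rendered as a heatmap over the original image; this is the simplest of the visualization approaches the survey discusses, and many refinements (e.g., smoothed or integrated gradients, class activation maps) build on the same gradient signal.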
