ResearchTrend.AI

arXiv:2001.02522 (v4, latest)

On Interpretability of Artificial Neural Networks

IEEE Transactions on Radiation and Plasma Medical Sciences (TRPMS), 2020
8 January 2020
Fenglei Fan
Jinjun Xiong
Mengzhou Li
Communities: AAML, AI4CE
Abstract

Deep learning has achieved great success in many important areas, including the processing of text, images, video, and graphs. However, the black-box nature of deep artificial neural networks has become the primary obstacle to their public acceptance and wide adoption in critical applications such as diagnosis and therapy. Given the huge potential of deep learning, interpreting neural networks has become one of the most critical research directions. In this paper, we systematically review recent studies on understanding the mechanisms of neural networks and shed light on some future directions of interpretability research. (This work is still in progress.)
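One family of interpretability methods surveyed in this line of work is gradient-based attribution: the sensitivity of a network's output to each input feature is read off from the input gradient. As a minimal sketch (a hypothetical toy example, not code from the paper), the saliency of each input to a one-hidden-layer tanh network can be computed analytically via the chain rule and verified by finite differences:

```python
import numpy as np

# Toy gradient-based saliency sketch (hypothetical example, not from the paper):
# a one-hidden-layer tanh network whose input gradient is computed analytically.

rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 4))   # input (3 features) -> hidden (4 units)
W2 = rng.standard_normal((4, 1))   # hidden -> scalar output

def forward(x):
    h = np.tanh(x @ W1)            # hidden activations
    return (h @ W2).item()         # scalar output

def saliency(x):
    """d(output)/d(input): larger magnitude = more influential feature."""
    h = np.tanh(x @ W1)
    dh = 1.0 - h**2                # tanh'(z), shape (4,)
    return ((W1 * dh) @ W2).ravel()  # chain rule, shape (3,)

x = np.array([0.5, -1.0, 2.0])
grad = saliency(x)

# Finite-difference check that the analytic gradient is correct.
eps = 1e-6
fd = np.array([(forward(x + eps * e) - forward(x - eps * e)) / (2 * eps)
               for e in np.eye(3)])
assert np.allclose(grad, fd, atol=1e-5)
```

In practice such gradients come from automatic differentiation rather than hand-derived formulas, but the finite-difference check above shows the principle behind saliency maps in two dozen lines.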
