Path Analysis for Effective Fault Localization in Deep Neural Networks

Abstract

Deep learning has revolutionized numerous fields, yet the reliability of Deep Neural Networks (DNNs) remains a concern due to their complexity and data dependency. Traditional software fault localization methods, such as Spectrum-based Fault Localization (SBFL), have been adapted to DNNs but often fall short in effectiveness. In particular, they overlook how faults propagate along neural pathways, which limits the precision of fault detection. Prior research indicates that examining neural pathways, rather than individual neurons, is crucial, since a fault in one neuron can affect its entire pathway; investigating these interconnected pathways allows problems arising from the collective activity of neurons to be identified and addressed. To address this limitation, we introduce NP-SBFL, a method that leverages Layer-wise Relevance Propagation (LRP) to identify essential faulty neural pathways. NP-SBFL explores multiple fault sources and pinpoints faulty neurons by analyzing their interconnections. In addition, our multi-stage gradient ascent (MGA) technique, an extension of gradient ascent (GA), activates suspicious neurons sequentially to enhance fault detection. We evaluated NP-SBFL-MGA on the well-established MNIST and CIFAR-10 datasets against two baselines, DeepFault and our simpler variant NP-SBFL-GA, each combined with three suspiciousness measures: Tarantula, Ochiai, and Barinel. The evaluation used all training and test samples (60,000 for MNIST and 50,000 for CIFAR-10) and showed that NP-SBFL-MGA significantly outperforms the baselines in identifying suspicious pathways and generating adversarial inputs. Notably, Tarantula with NP-SBFL-MGA achieved a 96.75% fault detection rate, compared to 89.90% for DeepFault. Our results also indicate a strong correlation between critical-pathway coverage and the number of failed tests in DNN fault localization.
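The three suspiciousness measures named above are the classic SBFL formulas, computed per neuron rather than per program statement. The sketch below is illustrative rather than the paper's implementation; it assumes the usual spectrum counts have already been collected for each neuron (ef/nf: failing tests that did/did not cover the neuron; ep/np: passing tests that did/did not cover it).

```python
import math

def suspiciousness(ef, nf, ep, np_, measure="tarantula"):
    """Classic SBFL suspiciousness scores, applied here to a neuron's spectrum.

    ef, nf: failing tests that did / did not cover the neuron
    ep, np_: passing tests that did / did not cover the neuron
    """
    if measure == "tarantula":
        fail = ef / (ef + nf) if ef + nf else 0.0
        pas = ep / (ep + np_) if ep + np_ else 0.0
        return fail / (fail + pas) if fail + pas else 0.0
    if measure == "ochiai":
        denom = math.sqrt((ef + nf) * (ef + ep))
        return ef / denom if denom else 0.0
    if measure == "barinel":
        # Barinel's commonly used simplified heuristic form
        return 1.0 - ep / (ep + ef) if ep + ef else 0.0
    raise ValueError(f"unknown measure: {measure!r}")

# Usage: rank neurons from most to least suspicious (hypothetical spectra).
spectra = {"n1": (30, 10, 5, 55), "n2": (3, 37, 40, 20)}
ranked = sorted(spectra, key=lambda n: suspiciousness(*spectra[n], "ochiai"),
                reverse=True)
```

A neuron covered mostly by failing tests (like n1 above) scores high under all three measures, which is what makes these formulas usable as drop-in ranking functions once a coverage notion for neurons is fixed.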
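The abstract does not detail MGA beyond "sequential neuron activation." As a rough illustration of the general idea, one can run gradient ascent on the input to raise the activations of the suspicious neurons one layer (stage) at a time, front to back. The following PyTorch sketch is a hypothetical rendering under that assumption, not the paper's method; model_layers, suspicious, and all parameters are invented for illustration.

```python
import torch

def multi_stage_gradient_ascent(model_layers, x0, suspicious, lr=0.01, steps=50):
    """Sketch: perturb the input so suspicious neurons activate layer by layer.

    model_layers: ordered list of layer modules of a feedforward model
    suspicious:   dict mapping layer index -> list of suspicious neuron indices
    """
    x = x0.clone().requires_grad_(True)
    for layer_idx in sorted(suspicious):           # one stage per layer, front to back
        for _ in range(steps):
            h = x
            for layer in model_layers[: layer_idx + 1]:  # forward up to this layer
                h = layer(h)
            # Objective: total activation of this layer's suspicious neurons.
            obj = h.flatten(1)[:, suspicious[layer_idx]].sum()
            (grad,) = torch.autograd.grad(obj, x)
            with torch.no_grad():
                x += lr * grad                     # ascend the activation objective
    return x.detach()
```

The staged loop is what distinguishes this from plain gradient ascent on a single joint objective: earlier layers on a suspicious pathway are driven to activate before later ones are optimized.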
