A Diagnostic Evaluation of Neural Networks Trained with the Error Diffusion Learning Algorithm
The Error Diffusion Learning Algorithm (EDLA) is a learning scheme that performs synaptically local weight updates driven by a single, globally defined error signal. Although originally proposed as an alternative to backpropagation, its behavior has not been systematically characterized. We provide a modern formulation and implementation of EDLA and evaluate multilayer perceptrons trained with EDLA on parity, regression, and image-classification benchmarks (Digits, MNIST, Fashion-MNIST, and CIFAR-10). Following the original formulation, multi-class classification is implemented by training independent single-output networks (one per class), which makes the computational cost scale linearly with the number of classes. Under comparable architectures and training protocols, EDLA consistently underperforms backpropagation-trained baselines on all benchmarks considered. Through an analysis of internal dynamics, we identify a depth-related failure mode in ReLU-based EDLA: activations can grow explosively, causing unstable training and degraded accuracy. To mitigate this instability, we incorporate root mean square normalization (RMSNorm) into EDLA training. RMSNorm substantially improves numerical stability and expands the depth range in which EDLA can be trained, but it does not close the accuracy gap and retains the overhead of the parallel-network implementation. Overall, we offer a diagnostic evaluation of where and why global error diffusion breaks down in deep networks, providing guidance for future development of local, biologically inspired learning rules.
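The abstract does not spell out the normalization used; RMSNorm itself (Zhang & Sennrich, 2019) simply rescales an activation vector to unit root mean square before applying a learned gain, which bounds the layer-to-layer activation growth described above. A minimal NumPy sketch (function name and example values are illustrative, not from the paper):

```python
import numpy as np

def rms_norm(x, gamma=None, eps=1e-8):
    """Root mean square normalization: rescale x to unit RMS,
    then apply an optional learned per-feature gain gamma."""
    rms = np.sqrt(np.mean(x ** 2) + eps)
    y = x / rms
    if gamma is not None:
        y = y * gamma
    return y

# Hypothetical exploding hidden activations are tamed to unit RMS:
h = np.array([50.0, -120.0, 300.0, 0.5])
h_norm = rms_norm(h)
print(np.sqrt(np.mean(h_norm ** 2)))  # ~1.0
```

Because the statistic is a single scalar per vector, RMSNorm adds little cost while keeping downstream activations on a fixed scale regardless of depth.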