
Nonnegative autoencoder with simplified random neural network

Abstract

This paper proposes new nonnegative (shallow and multi-layer) autoencoders that combine the spiking Random Neural Network (RNN) model, network architectures from the deep-learning literature, and training techniques from the nonnegative matrix factorization (NMF) literature. The shallow autoencoder is a simplified RNN model, which is then stacked into a multi-layer architecture. The learning algorithms are based on the weight-update rules of NMF, subject to the nonnegativity and probability constraints of an RNN model. The autoencoders equipped with these learning algorithms are tested on typical image datasets, including MNIST, the Yale face dataset, and CIFAR-10, as well as on 16 real-world datasets from different areas, and the results verify their efficacy. Simulations of the stochastic spiking behavior of this RNN autoencoder demonstrate that it can be implemented in a highly distributed manner.
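As background for the training technique the abstract references, the following is a minimal sketch of the classical multiplicative update rules for NMF (Lee and Seung), the family of updates the paper adapts for its nonnegative autoencoder. This is not the paper's algorithm; the function name, shapes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Factor a nonnegative matrix X (n x m) as W @ H with W, H >= 0,
    using standard multiplicative updates (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + eps  # nonnegative initialization
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        # Each update multiplies by a ratio of nonnegative terms,
        # so W and H remain elementwise nonnegative throughout.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy usage on a random nonnegative matrix (hypothetical data).
X = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = nmf_multiplicative(X, rank=5)
reconstruction_error = np.linalg.norm(X - W @ H)
```

Because the updates are multiplicative, nonnegativity is preserved automatically, which is why this style of rule is a natural fit for models with nonnegative (and probability) constraints such as the RNN autoencoder described above.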
