Critical Points Of An Autoencoder Can Provably Recover Sparsely Used
Overcomplete Dictionaries
In "Dictionary Learning" one tries to recover incoherent matrices $A^* \in \mathbb{R}^{n \times h}$ (typically overcomplete, with columns assumed to be normalized) and sparse vectors $x^* \in \mathbb{R}^h$ with a small support of size $h^p$ for some $0 < p < 1$, while being given access to observations $y \in \mathbb{R}^n$ where $y = A^* x^*$. In this work we undertake a rigorous analysis of the possibility that dictionary learning could be performed by gradient descent on "Autoencoders", which are neural networks with a single ReLU activation layer of size $h$. Towards this objective we propose a new autoencoder loss function which modifies the squared-loss error term and adds new regularization terms. We construct a proxy for the expected gradient of this loss function, which we motivate with high-probability arguments under natural distributional assumptions on the sparse code $x^*$. Under the same distributional assumptions on $x^*$, we show that, in the limit of large enough sparse code dimension, any zero of our proxy for the expected gradient of the loss function within a certain radius of $A^*$ corresponds to dictionaries whose action on the sparse vectors is indistinguishable from that of $A^*$. We also report simulations on synthetic data in support of our theory.
View on arXiv
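The setup in the abstract can be sketched in code. The following is a minimal NumPy illustration, not the paper's method: it uses hypothetical dimensions ($n = 20$, $h = 50$, support size $k = 3$), a weight-tied single-ReLU-layer autoencoder $\hat{y} = W\,\mathrm{relu}(W^\top y + b)$, and plain squared loss with hand-derived gradients; the paper's actual loss modifies the squared-loss term and adds regularizers, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

n, h = 20, 50   # observation dim, (overcomplete) sparse-code dim -- illustrative
k = 3           # support size of each sparse code (stand-in for h^p)

# Ground-truth incoherent dictionary A* with unit-norm columns
A_star = rng.standard_normal((n, h))
A_star /= np.linalg.norm(A_star, axis=0)

def sample_batch(m):
    """Draw sparse codes x* with support size k and observations y = A* x*."""
    X = np.zeros((h, m))
    for j in range(m):
        S = rng.choice(h, size=k, replace=False)
        X[S, j] = rng.uniform(1.0, 2.0, size=k)  # nonnegative coefficients
    return X, A_star @ X

def autoencoder(W, b, Y):
    """Weight-tied autoencoder with one ReLU layer of size h."""
    H = np.maximum(W.T @ Y + b[:, None], 0.0)    # encoder: relu(W^T y + b)
    return W @ H, H                              # decoder: W h

def sq_loss(W, b, Y):
    """Plain squared reconstruction loss (the paper's loss differs)."""
    Y_hat, _ = autoencoder(W, b, Y)
    return 0.5 * np.mean(np.sum((Y_hat - Y) ** 2, axis=0))

# Gradient descent from an initialization near A*
X, Y = sample_batch(200)
W = A_star + 0.05 * rng.standard_normal((n, h))
b = -0.1 * np.ones(h)

losses = []
for _ in range(50):
    Y_hat, H = autoencoder(W, b, Y)
    E = Y_hat - Y                                # reconstruction error
    M = (W.T @ Y + b[:, None] > 0).astype(float) # ReLU active-set mask
    G = (W.T @ E) * M                            # backprop through the encoder
    grad_W = (E @ H.T + Y @ G.T) / Y.shape[1]    # W appears in both layers
    grad_b = G.mean(axis=1)
    losses.append(sq_loss(W, b, Y))
    W -= 0.01 * grad_W
    b -= 0.01 * grad_b
```

Running the loop, the squared loss decreases from its initial value; the paper's analysis concerns the critical points of (a proxy for) the expected gradient of its modified loss near $A^*$, rather than this vanilla empirical descent.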