
Neural Simpletrons: Minimalistic Probabilistic Networks for Learning With Few Labels

Abstract

Deep learning is intensively studied from the perspectives of both unsupervised and supervised learning. The two learning schemes are typically combined using separate algorithms, which often results in complex, heterogeneous systems with large numbers of tunable parameters. In this work we study the potential of a tighter integration of unsupervised and supervised learning, and we empirically highlight the trade-off between model complexity and performance. Our aim is to define the most minimalistic system that learns from both labeled and unlabeled data. We first derive neural update and learning rules based on a hierarchical Poisson mixture model for classification, and then scale the network using standard deep learning techniques. Learning from data with few labels serves as a natural task in between purely unsupervised and purely supervised approaches. In quantitative evaluations on standard benchmarks, we find that this tighter integration of unsupervised and supervised learning yields very competitive performance for the minimalistic approach studied here. Furthermore, the monolithic learning scheme we use is among the least complex of all competitive approaches, with the lowest number of tunable parameters. Finally, in the limit of very few labels, we demonstrate applicability in a regime where no other competitive system has been reported to operate so far. Overall, our study argues for a stronger focus on, and integration of, unsupervised learning in order to simplify and improve the capabilities of current deep learning approaches.
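The abstract refers to neural update and learning rules derived from a hierarchical Poisson mixture model. As a rough illustration of the kind of update such a model implies, the sketch below runs EM on a single (non-hierarchical) Poisson mixture layer; the function names, the toy data, and the remark about clamping labeled responsibilities are illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch (not the authors' exact model): EM-style updates for a
# single-layer Poisson mixture over non-negative count data.
import numpy as np
from scipy.special import gammaln  # log-factorial term of the Poisson likelihood

def e_step(X, W):
    """Posterior responsibilities p(c | x) under Poisson components.

    X : (N, D) non-negative data (e.g., intensities treated as count rates)
    W : (C, D) positive Poisson rate parameters, one row per component
    """
    # log p(x | c) = sum_d [ x_d * log(w_cd) - w_cd - log(x_d!) ]
    log_lik = X @ np.log(W).T - W.sum(axis=1) - gammaln(X + 1).sum(axis=1, keepdims=True)
    log_lik -= log_lik.max(axis=1, keepdims=True)  # stabilize before exponentiating
    post = np.exp(log_lik)
    return post / post.sum(axis=1, keepdims=True)  # softmax-like normalization

def m_step(X, post, eps=1e-8):
    """Rate update: responsibility-weighted mean of the data per component."""
    return (post.T @ X + eps) / (post.sum(axis=0)[:, None] + eps)

# Unsupervised EM loop on toy data; labeled examples could be handled by
# clamping their responsibilities, one simple way to mix the two regimes.
rng = np.random.default_rng(0)
X = rng.poisson(5.0, size=(200, 16)).astype(float)  # toy count data
W = rng.uniform(1.0, 9.0, size=(10, 16))            # 10 mixture components
for _ in range(25):
    W = m_step(X, e_step(X, W))
```

The softmax-like normalization in the E-step is what gives such mixture posteriors a natural neural-network reading: each component acts like a unit whose activation is a normalized exponential of its log-likelihood.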
