End-to-end deep learning for big data analytics under a quasi-open set assumption

Abstract

Neural network classifiers trained with end-to-end learning regimes are argued to be the most viable option for big data analytics due to their low system complexity, fast training, and low computational cost. Big data classification models are generally trained in a semi-supervised learning framework, since unlabelled samples are plentiful while labelled samples are costly to gather. We assume that the unlabelled training samples in big data come both from the same classes as the available labelled training samples and from different classes, a setting we call a quasi-open set. Under quasi-open set assumptions, end-to-end classifier models must accurately classify samples from source classes, represented by both labelled and unlabelled training samples, while also detecting samples from novel classes, represented only by unlabelled training samples. To the best of our knowledge, no end-to-end work has trained under a quasi-open set assumption, making our results the first of their kind. Our proposed method extends the semi-supervised learning with GANs framework to also explicitly train, by end-to-end means, a certainty measure for classification. Unlike other certainty measures, which aim to reduce misclassifications among source classes, ours aims to provide a tractable means of separating source and novel classes. Experiments are conducted on a quasi-open set simulated from MNIST by selecting seven classes as source classes and treating the remaining three as possible novel classes. In all experiments, we achieve near-perfect detection of samples from novel classes. Source class classification accuracy, on the other hand, depends on the number of labelled training samples provided for the source classes, as in general end-to-end classification learning. End-to-end learning is held to be the most tractable solution for big data analytics, but only if models are trained both to classify source classes and to detect novel classes.
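The quasi-open-set experimental setup described above can be sketched as a data-splitting routine: a subset of classes is designated as source classes, a small number of samples per source class keep their labels, and all remaining samples (including every sample from the held-out novel classes) become the unlabelled pool. The sketch below is illustrative only; the function and parameter names are our own and are not taken from the paper, and toy integer labels stand in for MNIST.

```python
import numpy as np

def quasi_open_set_split(labels, source_classes, n_labelled_per_class, seed=0):
    """Partition dataset indices into labelled-source, unlabelled-source,
    and unlabelled-novel pools to simulate a quasi-open set.

    Hypothetical helper: names and parameters are illustrative,
    not the paper's actual implementation.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    source_mask = np.isin(labels, source_classes)

    labelled, unlabelled_source = [], []
    for c in source_classes:
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Keep labels for only a few samples per source class;
        # the rest join the unlabelled pool.
        labelled.extend(idx[:n_labelled_per_class])
        unlabelled_source.extend(idx[n_labelled_per_class:])

    # Every sample from a non-source class is unlabelled and novel.
    unlabelled_novel = np.flatnonzero(~source_mask)
    return np.array(labelled), np.array(unlabelled_source), unlabelled_novel

# Toy example standing in for MNIST digits 0-9, 100 samples per class:
labels = np.repeat(np.arange(10), 100)
source = [0, 1, 2, 3, 4, 5, 6]          # seven source classes, as in the paper
lab, unlab_src, unlab_nov = quasi_open_set_split(labels, source, 10)
```

During training, a semi-supervised GAN-style model would see class labels only for the `lab` indices, while `unlab_src` and `unlab_nov` are presented together as one undifferentiated unlabelled pool; the novel/source distinction is known only to the evaluation code.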
