
A New Fuzzy Stacked Generalization Technique and Analysis of its Performance

Abstract

In this study, we propose two hypotheses that establish the conditions under which feature selection and instance selection boost the performance of the base-layer classifiers in a Stacked Generalization architecture. Building on these hypotheses, we propose a robust Fuzzy Stacked Generalization (FSG) technique that assures a performance better than that of the individual classifiers. The proposed FSG combines a set of fuzzy classifiers, each of which receives a different feature set extracted from the same set of samples. The fuzzy membership values at the output of each classifier are concatenated to form the feature vectors of a decision space. These vectors are then fed to a meta-layer classifier, which learns the degree of accuracy of the base-layer decisions. We analyze the learning mechanism of this architecture in detail and evaluate its performance. We show that the success of the FSG depends strongly on how the individual classifiers collaborate in learning the samples, each of which is represented by a different feature vector in each classifier's own feature space. Rather than the performance of the individual base-layer classifiers, the diversity and cooperation of the classifiers become the key to improving the overall performance of the proposed FSG. A weak classifier may boost the overall performance more than a strong one if it can recognize, in its own feature space, the samples that the remaining classifiers miss. The problem of designing a Stacked Generalization architecture therefore reduces to the design of the feature spaces for the base-layer classifiers. Our experiments explore the type of collaboration among the individual classifiers that is required for an improved performance of the suggested FSG architecture.
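The pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes fuzzy k-NN base-layer classifiers (a common choice for producing fuzzy membership values) and a simple nearest-centroid meta-layer classifier; the synthetic two-view data, the helper `fuzzy_knn_memberships`, and all parameter values are invented for the example. The key structural idea it demonstrates is the one from the abstract: each base classifier sees a different feature space, and their membership vectors are concatenated into the decision space fed to the meta classifier.

```python
import numpy as np

def fuzzy_knn_memberships(X_train, y_train, X, k=3, n_classes=2, m=2.0):
    """Fuzzy k-NN-style memberships: each sample receives a class-membership
    distribution from the inverse-distance-weighted votes of its k nearest
    training neighbours (hypothetical helper for this sketch)."""
    mu = np.zeros((len(X), n_classes))
    for i, x in enumerate(X):
        d = np.linalg.norm(X_train - x, axis=1)
        nn = np.argsort(d)[:k]
        w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + 1e-12)
        for j, wj in zip(nn, w):
            mu[i, y_train[j]] += wj
        mu[i] /= mu[i].sum()
    return mu

rng = np.random.default_rng(0)
# Two "views" (distinct feature spaces) of the same 40 samples.
n = 40
y = np.array([0] * (n // 2) + [1] * (n // 2))
view1 = rng.normal(loc=y[:, None] * 3.0, scale=1.0, size=(n, 2))
view2 = rng.normal(loc=(1 - y)[:, None] * 3.0, scale=1.0, size=(n, 2))

# Base layer: one fuzzy classifier per feature space; concatenating the
# membership vectors forms the meta-layer "decision space".
mu1 = fuzzy_knn_memberships(view1, y, view1, n_classes=2)
mu2 = fuzzy_knn_memberships(view2, y, view2, n_classes=2)
decision_space = np.hstack([mu1, mu2])  # shape (n, 2 * n_classes)

# Meta layer: here a nearest-centroid classifier on the concatenated
# memberships (any classifier could be substituted).
centroids = np.array([decision_space[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(
    np.linalg.norm(decision_space[:, None] - centroids[None], axis=2), axis=1)
print("training accuracy:", (pred == y).mean())
```

Note that the sketch evaluates on the training samples for brevity; a faithful stacked-generalization setup would generate the base-layer memberships by cross-validation so the meta classifier never sees memberships produced from a sample's own label.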
