Nonlinear dimensionality reduction of data by multilayer bootstrap networks

Learning invariant semantic concepts from highly-variant high-dimensional data is important across broad branches of science. If we encode the data and concepts compactly, for example by 0s and 1s, this learning problem becomes one of reducing high-dimensional codes to low-dimensional representations. Dimensionality reduction, a central problem of statistics and machine learning, has been intensively studied; classification and clustering are two special cases of dimensionality reduction that reduce high-dimensional data to discrete points. Bootstrap resampling is a simple and fundamental method in statistics, and has achieved great success in supervised dimensionality reduction. However, bootstrap-based unsupervised dimensionality reduction methods have not been competitive with kernel methods and neural networks. Here we describe a simple multilayer bootstrap network for unsupervised dimensionality reduction, in which each layer of the network is a group of mutually independent k-centers clusterings whose centers are simply randomly sampled data points. We find that this simple method outperformed 7 well-known unsupervised dimensionality reduction methods on both very small-scale biomedical data and large-scale image and document data, with much less training time than multilayer neural networks on large-scale data. Our findings successfully generalize bootstrap to unsupervised dimensionality reduction and enrich the family of bootstrap methods. Furthermore, given the broad use of simple methods, the described method, which can be understood without domain knowledge, may find applications in many branches of science.
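The core construction described above (each layer is a group of mutually independent k-centers clusterings whose centers are randomly sampled data points, with each clustering emitting a one-hot code of the nearest center) can be illustrated with a minimal sketch. The function name, the number of clusterings per layer, and the shrinking k-schedule below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def mbn_transform(X, n_clusterings=10, k_schedule=(64, 32, 16), seed=0):
    """Minimal sketch of a multilayer bootstrap network encoder.

    Each layer is a group of mutually independent k-centers
    clusterings; the centers of each clustering are randomly
    sampled data points (bootstrap-style). Each clustering maps
    a point to a one-hot code of its nearest center, and the
    codes of all clusterings are concatenated as the layer's
    output. k shrinks layer by layer (illustrative schedule).
    """
    rng = np.random.default_rng(seed)
    H = np.asarray(X, dtype=float)
    for k in k_schedule:
        codes = []
        for _ in range(n_clusterings):
            # centers are just randomly sampled data points
            idx = rng.choice(len(H), size=k, replace=False)
            centers = H[idx]
            # squared Euclidean distance to every center
            d = ((H[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            # one-hot code of the nearest center
            codes.append(np.eye(k)[d.argmin(axis=1)])
        # sparse binary representation fed to the next layer
        H = np.concatenate(codes, axis=1)
    return H
```

With 10 clusterings and a final k of 16, each input point is encoded as a 160-dimensional binary vector with exactly 10 ones (one per clustering), a sparse low-variance representation in the spirit described in the abstract.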