Kernel methods have been used widely in a number of tasks, but have had limited success in Natural Language Processing (NLP) due to the high cost of computing kernel similarities between discrete natural language structures. A recently proposed technique, Kernelized Locality Sensitive Hashing (KLSH), can significantly reduce this computational cost, but is only applicable to classifiers operating on kNN graphs. Here we propose to use random subspaces of KLSH codes to efficiently construct an explicit representation of natural language structures suitable for general classification methods. Further, we propose an approach for optimizing a KLSH model for classification problems by maximizing a variational lower bound on the mutual information between the KLSH codes (feature vectors) and the class labels. We apply the proposed approach to a biomedical information extraction task, and observe robust improvements in accuracy, along with a significant speedup compared to conventional kernel methods.
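The pipeline the abstract describes, binary KLSH codes used as explicit feature vectors with random subspaces of the bits fed to a general classifier, can be sketched compactly. Below is a minimal, hedged sketch in Python/NumPy following the KLSH construction of Kulis and Grauman (2009): the RBF kernel stands in for the expensive structural kernel, and the landmark count, bit count, subset size, and the use of a random forest's per-split feature sampling as the random-subspace ensemble are all illustrative assumptions, not the paper's exact setup (the mutual-information optimization of the KLSH model is omitted).

```python
import numpy as np
from scipy.linalg import sqrtm
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

def klsh_codes(K_landmarks, K_data, n_bits=64, t=30):
    """Kernelized LSH codes (Kulis & Grauman, 2009).

    K_landmarks: (p, p) kernel matrix among p sampled landmark points.
    K_data:      (n, p) kernel values between data points and landmarks.
    Returns an (n, n_bits) binary code matrix.
    """
    p = K_landmarks.shape[0]
    # Center the landmark kernel matrix (kernel-PCA-style centering;
    # data-side centering is omitted here for brevity).
    H = np.eye(p) - np.ones((p, p)) / p
    Kc = H @ K_landmarks @ H
    # K^{-1/2} via pseudo-inverse of the (real part of the) matrix square root.
    K_inv_sqrt = np.linalg.pinv(np.real(sqrtm(Kc)))
    # Each hash bit uses a random size-t subset of the landmarks:
    # w_b = K^{-1/2} e_S, then h_b(x) = sign(sum_i w_b[i] * k(x, x_i)).
    W = np.zeros((p, n_bits))
    for b in range(n_bits):
        e = np.zeros(p)
        e[rng.choice(p, size=t, replace=False)] = 1.0
        W[:, b] = K_inv_sqrt @ e
    return (K_data @ W > 0).astype(np.uint8)

# Toy data standing in for kernel similarities between parse structures.
X = rng.normal(size=(500, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
landmarks = X[rng.choice(len(X), size=100, replace=False)]

codes = klsh_codes(rbf_kernel(landmarks), rbf_kernel(X, landmarks))

# Random subspaces of the KLSH bits as explicit features: a random
# forest with per-split bit sampling acts as the subspace ensemble.
clf = RandomForestClassifier(n_estimators=100, max_features="sqrt")
clf.fit(codes, y)
print("train accuracy:", clf.score(codes, y))
```

The key efficiency point: the kernel is evaluated only against a small set of landmark points (p evaluations per example), rather than against the full training set as a conventional kernel classifier would require.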