Chains of Reasoning over Entities, Relations, and Text using Recurrent Neural Networks

Our goal is to combine the rich multi-step inference of symbolic logical reasoning with the generalization capabilities of vector embeddings and neural networks. We are particularly interested in complex reasoning about the entities and relations in knowledge bases. Recently, Neelakantan et al. (2015) presented a compelling methodology that uses recurrent neural networks (RNNs) to compose the meaning of relations in a Horn clause consisting of a connected chain. However, this work has multiple weaknesses: it accounts for relations but not entities; it limits generalization by training many separate models; and it does not combine evidence over multiple paths. In this paper we address all of these weaknesses, making key strides toward our goal of rich logical reasoning with neural networks: our RNN leverages and jointly trains both relation and entity-type embeddings; we train a single high-capacity RNN to compose Horn clause chains across all predicted relation types; and we demonstrate that pooling evidence across multiple chains can dramatically improve both speed of training and final accuracy. We also explore multi-task training of entity and relation types. On a large dataset from ClueWeb and Freebase, our approach provides a significant increase in mean average precision, from 55.3% to 73.2%.
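To make the architecture concrete, the sketch below is a minimal illustration (not the authors' released code) of the core idea as described in the abstract: a single shared RNN composes relation embeddings, concatenated with entity-type embeddings, along each path between an entity pair, and evidence from the resulting path scores is pooled into one prediction score for a query relation. All class and parameter names, embedding dimensions, and the choice of LogSumExp pooling are illustrative assumptions here.

```python
# Minimal sketch of a path-RNN scorer with pooling over multiple paths.
# Names, dimensions, and the LogSumExp pooling choice are assumptions
# for illustration, not the paper's exact implementation.

import torch
import torch.nn as nn

class PathRNNScorer(nn.Module):
    def __init__(self, n_relations, n_entity_types, dim=50):
        super().__init__()
        self.rel_emb = nn.Embedding(n_relations, dim)      # relation embeddings
        self.type_emb = nn.Embedding(n_entity_types, dim)  # entity-type embeddings
        # One shared high-capacity RNN composes chains for all query relations,
        # instead of training a separate model per relation type.
        self.rnn = nn.RNN(input_size=2 * dim, hidden_size=dim, batch_first=True)
        self.query_emb = nn.Embedding(n_relations, dim)    # target relation vectors

    def score_paths(self, rel_ids, type_ids, query_id):
        """Score one entity pair for one query relation.

        rel_ids, type_ids: LongTensors of shape (n_paths, path_len) giving
        the relation and intermediate entity-type IDs along each path.
        query_id: scalar LongTensor identifying the query relation.
        """
        # Each step's input: relation embedding concatenated with the
        # embedding of the entity type reached at that step.
        steps = torch.cat([self.rel_emb(rel_ids), self.type_emb(type_ids)], dim=-1)
        _, h_final = self.rnn(steps)                     # (1, n_paths, dim)
        # Dot each path's final hidden state with the query relation vector.
        path_scores = h_final.squeeze(0) @ self.query_emb(query_id)  # (n_paths,)
        # Pool evidence across all paths between the entity pair.
        return torch.logsumexp(path_scores, dim=0)
```

Because the pooling is differentiable, gradients flow back through every path rather than only the single best one, which is one way pooling can speed up training relative to per-path scoring.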