Recursive Neural Networks Can Learn Logical Semantics

Recursive neural networks (RNNs) for sentence meaning have been successful on many tasks, but it remains an open question whether they can learn compositional semantic representations that support logical deduction. We pursue this question by evaluating whether two such models---plain RNNs and recursive neural tensor networks (RNTNs)---can correctly learn to identify logical relationships such as entailment and contradiction. In our first set of experiments, we generate artificial data from a logical grammar and use it to evaluate the models' ability to learn to handle basic relational reasoning, recursive structures, and quantification. We then evaluate the models on the more natural SICK challenge data. Both models perform competitively on the SICK data and generalize well in all three experiments on simulated data, suggesting that they can learn suitable representations for logical inference in natural language.
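The sketch below illustrates the two composition functions the abstract compares: a plain recursive network that combines two child vectors with a single affine layer, and a tensor variant that adds a bilinear (tensor) term. It is a minimal illustration only, not the authors' implementation; the tanh activation, dimensions, toy vocabulary, and the three-way softmax comparison layer are assumed choices for demonstration.

```python
# Minimal sketch of TreeRNN vs. RNTN composition over a binary parse tree.
# All parameter names, dimensions, and the label set are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 16  # embedding / hidden dimension (assumed)

# Plain recursive composition: parent = tanh(W [a; b] + bias)
W = rng.normal(scale=0.1, size=(d, 2 * d))
bias = np.zeros(d)

# Tensor (RNTN) composition adds a slice-wise bilinear term [a; b]^T T [a; b]
T = rng.normal(scale=0.1, size=(d, 2 * d, 2 * d))

def compose_rnn(a, b):
    """Plain recursive composition of two child vectors into a parent vector."""
    c = np.concatenate([a, b])
    return np.tanh(W @ c + bias)

def compose_rntn(a, b):
    """RNTN composition: linear term plus one quadratic form per output unit."""
    c = np.concatenate([a, b])
    bilinear = np.einsum('i,kij,j->k', c, T, c)
    return np.tanh(W @ c + bias + bilinear)

def encode(tree, embed, compose):
    """Recursively encode a binary parse tree given as nested tuples of words."""
    if isinstance(tree, str):
        return embed[tree]
    left, right = tree
    return compose(encode(left, embed, compose), encode(right, embed, compose))

# Toy vocabulary and a toy premise/hypothesis pair (illustrative only).
vocab = ['all', 'some', 'dogs', 'animals', 'bark']
embed = {w: rng.normal(scale=0.1, size=d) for w in vocab}

premise = ('all', ('dogs', 'bark'))
hypothesis = ('some', ('animals', 'bark'))

p = encode(premise, embed, compose_rntn)
h = encode(hypothesis, embed, compose_rntn)

# A simple comparison layer over the two sentence vectors, followed by a
# softmax over entailment / contradiction / neutral (an assumed label set).
W_cls = rng.normal(scale=0.1, size=(3, 2 * d))
logits = W_cls @ np.concatenate([p, h])
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(dict(zip(['entailment', 'contradiction', 'neutral'], probs.round(3))))
```

In practice the parameters would be trained end-to-end on labeled sentence pairs; the point here is only the structural difference between the two composition functions the paper evaluates.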