A Generative Model of Words and Relationships from Multiple Sources

Abstract

Neural Language Models are a powerful tool to meaningfully embed words into semantic vector spaces. However, learning vector space models of language generally relies on the availability of abundant and diverse training examples. In highly specialized domains this requirement may not be met due to difficulties in obtaining a large corpus, or the limited range of expression in average usage. Prior knowledge about entities in the language often exists in a knowledge base or ontology. We propose a generative model which allows for modeling and transferring semantic information in vector spaces by combining diverse data sources. We generalize the concept of co-occurrence from distributional semantics to include other types of relations between entities, evidence for which can come from a knowledge base (such as WordNet or UMLS). Our model defines a probability distribution over triplets consisting of word pairs with relations. Through stochastic maximum likelihood we learn a representation of these words as elements of a vector space and model the relations as affine transformations. We demonstrate the effectiveness of our generative approach by outperforming recent models on a knowledge-base completion task and demonstrating its ability to benefit from partially observed or fully unobserved data entries. Our model can also operate in a semi-supervised setting, where word pairs with no known relation are used as training data. We further demonstrate the usefulness of learning from different data sources with overlapping vocabularies.
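To make the core idea concrete, the following is a minimal sketch, not the authors' actual parameterization or training procedure: words are embedded as vectors, each relation is represented by an affine transformation, and the plausibility of a triplet (word1, relation, word2) is turned into a probability over candidate words. All names, shapes, and the squared-distance energy are illustrative assumptions; the paper's model is trained with stochastic maximum likelihood, which is not shown here.

```python
import numpy as np

# Hypothetical illustration (names and shapes are assumptions, not from the paper):
# words are embedded as vectors, each relation r is an affine map (A_r, b_r),
# and a triplet (w1, r, w2) is scored by how well the transformed head
# embedding matches the tail embedding.

rng = np.random.default_rng(0)
dim = 50
vocab = ["dog", "canine", "animal"]

# Word embeddings: one vector per word.
E = {w: rng.normal(scale=0.1, size=dim) for w in vocab}

# Relations as affine transformations: a matrix A_r and an offset b_r.
relations = {
    "synonym_of": (np.eye(dim) + rng.normal(scale=0.01, size=(dim, dim)),
                   rng.normal(scale=0.01, size=dim)),
    "is_a":       (np.eye(dim) + rng.normal(scale=0.01, size=(dim, dim)),
                   rng.normal(scale=0.01, size=dim)),
}

def energy(w1, rel, w2):
    """Lower energy = more plausible triplet (w1, rel, w2)."""
    A, b = relations[rel]
    transformed = A @ E[w1] + b          # affine transform of the head word
    return np.sum((transformed - E[w2]) ** 2)

def triplet_prob(w1, rel, w2, candidates):
    """Softmax over candidate tail words: a stand-in for the paper's
    distribution over triplets, restricted to a small candidate set."""
    scores = np.array([-energy(w1, rel, c) for c in candidates])
    scores -= scores.max()               # numerical stability
    p = np.exp(scores)
    return dict(zip(candidates, p / p.sum()))[w2]

print(triplet_prob("dog", "is_a", "animal", vocab))
```

In such a formulation, knowledge-base completion amounts to ranking candidate tail words by this probability, and unlabeled word pairs could enter training through a designated "unknown" relation, which is one way to read the semi-supervised setting described above.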
