Linguistic Universals: Language-independent semantic fingerprints

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019
Abstract

Semantic processing is central to our understanding of natural languages: it ensures accuracy in monolingual communication and minimizes losses in cross-lingual translation. Yet unlike phonology, morphology, and syntax, the mechanism of semantics remains less-charted territory. Data-hungry machine-learning algorithms achieve impressive success on some document-comprehension tasks through high-dimensional numerical representations of words and phrases, but such computationally taxing algorithms are far from the efficient mechanism by which humans understand texts and acquire knowledge. Here we advance a cost-effective model that assigns language-independent semantic fingerprints to the words in a given document, without consulting an external knowledge base or thesaurus. Our universal semantic fingerprints quantify the local meaning of words in 14 representative languages across 5 major language families. Instead of embedding words into very high-dimensional spaces, our method represents each concept by a few dozen parameters, interpretable as algebraic invariants in succinct statistical operations. Concise and transparent, our semantic fingerprints numerically characterise the connectivity and association of individual concepts, even with scant input data. These semantic representations enable a robot reader both to understand short texts in a given language (automated question answering) and to match medium-length texts across different languages (automated word translation).
