
Probing Contextual Language Models for Common Ground with Visual Representations

Abstract

While large-scale contextual language models have enjoyed great success recently, much remains to be understood about what is encoded in their representations. In this work, we characterize how contextual representations of concrete nouns extracted by trained language models relate to the physical properties of the objects they refer to. Our approach uses a probing model that examines how effective these language representations are in discerning between different visual representations. We show that many recent language models yield representations that are useful in retrieving semantically aligned image patches, and explore the role of context in this process. Much weaker results are found in control experiments, attesting to the selectivity of the probe. All examined models greatly underperform humans in retrieval, highlighting substantial room for future progress. Altogether, our findings offer new empirical insight into language grounding and its materialization in contextual language models.
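To make the probing setup concrete, the sketch below shows one common way such a text-to-vision retrieval probe can be built: a linear map from the language embedding space into the visual embedding space, with aligned image patches retrieved by cosine similarity. This is a minimal illustration with random stand-in embeddings and a ridge-regression fit, not the paper's actual probe, training objective, or data; all dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: contextual embeddings of concrete nouns
# (e.g. from a pretrained language model) and visual embeddings of
# candidate image patches. Row i of each array is an aligned pair.
text_dim, vis_dim, n_pairs = 768, 512, 100
text_emb = rng.normal(size=(n_pairs, text_dim))
vis_emb = rng.normal(size=(n_pairs, vis_dim))

# A linear probe W maps language space into visual space. Here it is
# fit with ridge-regularized least squares purely for illustration.
lam = 1e-2
W = np.linalg.solve(
    text_emb.T @ text_emb + lam * np.eye(text_dim),
    text_emb.T @ vis_emb,
)

def retrieve(i, k=5):
    """Rank candidate patches for noun i by cosine similarity."""
    q = text_emb[i] @ W
    sims = (vis_emb @ q) / (
        np.linalg.norm(vis_emb, axis=1) * np.linalg.norm(q) + 1e-9
    )
    return np.argsort(-sims)[:k]

# Recall@5: fraction of nouns whose aligned patch is in the top 5.
recall = float(np.mean([i in retrieve(i) for i in range(n_pairs)]))
```

A stronger probe would typically be trained with a contrastive ranking loss on held-out pairs rather than fit in closed form, but the retrieval-by-similarity evaluation is the same idea.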
