Negated LAMA: Birds cannot fly
Annual Meeting of the Association for Computational Linguistics (ACL), 2019

Abstract
Pretrained language models have achieved remarkable improvements in a broad range of natural language processing tasks, including question answering (QA). To analyze pretrained language model performance on QA, we extend the LAMA (Petroni et al., 2019) evaluation framework with a component focused on negation. We find that pretrained language models are equally prone to generating facts ("birds can fly") and their negations ("birds cannot fly"). This casts doubt on the claim that pretrained language models have adequately learned factual knowledge.
