How Much Knowledge Can You Pack Into the Parameters of a Language Model?

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Abstract

It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales surprisingly well with model size and outperforms models that explicitly look up knowledge on the open-domain variants of Natural Questions and WebQuestions.
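To make the setup concrete, here is a minimal sketch of closed-book question answering: the model receives only the question, with no retrieved passages, so any correct answer must come from knowledge stored in its parameters. The checkpoint name and prompt format are assumptions for illustration (the paper's own experiments fine-tuned pre-trained T5-style checkpoints on QA datasets); this is not the authors' released code.

    # Closed-book QA sketch: question in, answer out, no external context.
    # "google/flan-t5-base" is an assumed stand-in checkpoint, not the
    # paper's fine-tuned model.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    model_name = "google/flan-t5-base"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    question = "Who developed the theory of general relativity?"
    inputs = tokenizer(question, return_tensors="pt")
    # No documents are passed in: the answer, if correct, is recalled
    # purely from the model's parameters.
    outputs = model.generate(**inputs, max_new_tokens=16)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Scaling this recipe to larger checkpoints is what the paper measures: as parameter count grows, recall of factual knowledge improves enough to rival systems that retrieve from an external corpus.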
