NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned
Sewon Min, Jordan L. Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oğuz, Xilun Chen, Vladimir Karpukhin, Stanislav Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, Wen-tau Yih

Abstract
We review the EfficientQA competition from NeurIPS 2020. The competition focused on open-domain question answering (QA), where systems take natural language questions as input and return natural language answers. The aim of the competition was to build systems that can predict correct answers while also satisfying strict on-disk memory budgets. These memory budgets were designed to encourage contestants to explore the trade-off between storing large retrieval corpora and storing the parameters of large learned models. In this report, we describe the motivation and organization of the competition, review the best submissions, and analyze system predictions to inform a discussion of evaluation for open-domain QA.
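Automatic evaluation for open-domain QA is typically done by exact match against a set of reference answers after light string normalization, in the style of the SQuAD evaluation script. The sketch below illustrates this standard metric; the exact normalization used in the competition's official scorer may differ in details.

```python
import re
import string


def normalize_answer(s: str) -> str:
    """Lower-case, drop punctuation and articles, and collapse whitespace
    (the standard SQuAD-style answer normalization)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """True if the normalized prediction equals any normalized reference."""
    pred = normalize_answer(prediction)
    return any(pred == normalize_answer(gold) for gold in gold_answers)


# Surface-form differences are forgiven; genuine paraphrases are not,
# which is one reason automatic scores can understate system accuracy.
print(exact_match("The Beatles", ["Beatles"]))  # True
print(exact_match("four", ["4"]))               # False
```

The brittleness visible in the second example (a correct answer in a different surface form scores zero) is one motivation for the human evaluation and the discussion of evaluation practices mentioned in the abstract.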