
Retrieval Capabilities of Large Language Models Scale with Pretraining FLOPs

Main: 8 pages · 5 figures · 2 tables · Bibliography: 5 pages · Appendix: 2 pages
Abstract

How does retrieval performance scale with pretraining FLOPs? We benchmark retrieval performance across LLM sizes from 125 million to 7 billion parameters, pretrained on datasets ranging from 1 billion tokens to more than 2 trillion tokens. We find that retrieval performance on zero-shot BEIR tasks scales predictably with LLM size, training duration, and estimated FLOPs. We also show that in-context learning scores are strongly correlated with retrieval scores across retrieval tasks. Finally, we highlight the implications of these findings for the development of LLM-based retrievers.
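To make the scaling setup concrete, the sketch below shows one common way to estimate pretraining compute and fit a log-linear trend to retrieval scores. The abstract does not specify the paper's FLOP estimator, so the standard dense-transformer approximation FLOPs ≈ 6·N·D (N parameters, D training tokens) is assumed here; the BEIR scores in the example are placeholder values used only to illustrate the fitting step, not results from the paper.

```python
import numpy as np

def estimated_pretraining_flops(n_params: float, n_tokens: float) -> float:
    """Approximate pretraining compute with the common 6*N*D rule of thumb
    (forward + backward pass for a dense transformer). This estimator is an
    assumption; the paper's exact formula is not given in the abstract."""
    return 6.0 * n_params * n_tokens

# Hypothetical (parameters, tokens) grid spanning the ranges quoted in the
# abstract: 125M-7B parameters, 1B to 2T+ training tokens.
params = [125e6, 1.3e9, 7e9]
tokens = [1e9, 300e9, 2e12]
flops = np.array([estimated_pretraining_flops(n, d) for n in params for d in tokens])

# Placeholder zero-shot BEIR nDCG@10 values, fabricated purely to demonstrate
# the curve-fitting step; substitute real benchmark scores in practice.
beir_ndcg = 0.02 * np.log10(flops) - 0.05

# Fit the log-linear relationship: score ~ a * log10(FLOPs) + b.
slope, intercept = np.polyfit(np.log10(flops), beir_ndcg, deg=1)
print(f"fitted trend: nDCG ~= {slope:.3f} * log10(FLOPs) + {intercept:.3f}")
```

A log-linear fit of this form is just one reasonable choice for modeling the reported trend; power-law fits on raw FLOPs are an equally common alternative in the scaling-law literature.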
