
Evaluating LLMs on Real-World Forecasting Against Expert Forecasters

Main: 12 pages · 6 figures · 18 tables · Bibliography: 2 pages · Appendix: 8 pages
Abstract

Large language models (LLMs) have demonstrated remarkable capabilities across diverse tasks, but their ability to forecast future events remains understudied. A year ago, LLMs struggled to come close to the accuracy of a human crowd. I evaluate state-of-the-art LLMs on 464 forecasting questions from Metaculus, comparing their performance against that of top human forecasters. Frontier models achieve Brier scores that ostensibly surpass the human crowd but still significantly underperform a group of experts.
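For readers unfamiliar with the metric, the Brier score used above is the mean squared error between probabilistic forecasts and binary outcomes; lower is better. A minimal sketch (the example numbers are hypothetical, not data from the paper):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and binary
    outcomes. 0.0 is a perfect score; always guessing 0.5 yields 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Toy example: three questions, probabilities vs. resolved outcomes.
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # ≈ 0.0467
```

Because the score is a squared penalty, confident wrong forecasts (e.g. 0.9 on a question that resolves "no") are punished much more heavily than hedged ones.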
