PROPHET: An Inferable Future Forecasting Benchmark with Causal Intervened Likelihood Estimation

Predicting future events stands as one of the ultimate aspirations of artificial intelligence. Recent advances in large language model (LLM)-based systems have shown remarkable potential in forecasting future events, garnering significant interest in the research community. Several benchmarks have been established to evaluate forecasting capabilities by formalizing event prediction as a retrieval-augmented generation (RAG) and reasoning task, in which each prediction question is answered with the help of relevant retrieved news articles. However, because these benchmarks do not consider whether a question is supported by valid or sufficient rationales, some of their questions may be inherently non-inferable. To address this issue, we introduce a new benchmark, PROPHET, which comprises inferable forecasting questions paired with relevant news for retrieval. To ensure inferability, we propose Causal Intervened Likelihood (CIL), a statistical measure that assesses inferability through causal inference. In constructing the benchmark, we first collect recent trend forecasting questions and then filter them using CIL, yielding an inferable benchmark for event prediction. Through extensive experiments, we demonstrate the validity of CIL and conduct in-depth investigations into event prediction with its aid. We then evaluate several representative prediction systems on PROPHET, drawing valuable insights for future directions.
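The collect-then-filter construction described above can be illustrated with a minimal sketch. The actual CIL computation is defined in the paper and is not reproduced here; `cil_score` below is a hypothetical placeholder standing in for the causal-inference-based measure, and the threshold value is an assumption for illustration only.

```python
def cil_score(question, retrieved_news):
    """Hypothetical placeholder for the CIL measure.

    In PROPHET this would estimate, via causal inference, how strongly the
    retrieved news supports answering the question; here we simply return
    1.0 when any supporting news exists and 0.0 otherwise.
    """
    return 1.0 if retrieved_news else 0.0

def filter_inferable(questions, news_index, threshold=0.5):
    """Keep only questions whose CIL score passes the threshold."""
    inferable = []
    for q in questions:
        news = news_index.get(q["id"], [])
        if cil_score(q, news) >= threshold:
            inferable.append(q)
    return inferable

# Toy example: question 2 has no supporting news, so it is filtered out.
questions = [
    {"id": 1, "text": "Will event X occur by June?"},
    {"id": 2, "text": "Will event Y occur by June?"},
]
news_index = {1: ["relevant news article"]}
print([q["id"] for q in filter_inferable(questions, news_index)])  # [1]
```

The sketch only captures the pipeline shape (score each question against its retrieved evidence, then threshold); the substance of the benchmark lies in how CIL itself is estimated.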
@article{tao2025_2504.01509,
  title={PROPHET: An Inferable Future Forecasting Benchmark with Causal Intervened Likelihood Estimation},
  author={Zhengwei Tao and Zhi Jin and Bincheng Li and Xiaoying Bai and Haiyan Zhao and Chengfeng Dou and Xiancai Chen and Jia Li and Linyu Li and Chongyang Tao},
  journal={arXiv preprint arXiv:2504.01509},
  year={2025}
}