Large language models (LLMs) have been extensively studied for their abilities to generate convincing natural language sequences; however, their utility for quantitative information retrieval is less well understood. Here we explore the feasibility of LLMs as a mechanism for quantitative knowledge retrieval to aid two data analysis tasks: elicitation of prior distributions for Bayesian models and imputation of missing data. We introduce a framework that leverages LLMs to enhance Bayesian workflows by eliciting expert-like prior knowledge and imputing missing data. Tested on diverse datasets, this approach can improve predictive accuracy and reduce data requirements, offering significant potential in healthcare, environmental science and engineering applications. We discuss the implications and challenges of treating LLMs as 'experts'.
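As a rough illustration of the kind of workflow the abstract describes, the sketch below prompts an LLM, framed as a domain expert, to return the parameters of a Normal prior. The model name, prompt wording and JSON reply format are assumptions chosen for illustration, not the authors' actual elicitation protocol.

```python
# Illustrative sketch only: model name, prompt and output schema are assumptions,
# not the protocol used in the paper.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def elicit_normal_prior(parameter_description: str) -> dict:
    """Ask the LLM, acting as a domain expert, for a Normal prior's mean and sd."""
    prompt = (
        "You are a domain expert asked to specify a prior for a Bayesian model.\n"
        f"Parameter: {parameter_description}\n"
        'Reply with JSON only, e.g. {"mean": 0.0, "sd": 1.0}.'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whatever LLM is available
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)

prior = elicit_normal_prior(
    "effect (log-odds) of a 1 mmHg increase in systolic blood pressure on stroke risk"
)
print(prior)  # e.g. {"mean": 0.01, "sd": 0.02}
```

The same pattern of a structured prompt with a machine-readable reply can be adapted to missing-data imputation, e.g. by presenting the observed fields of a record and asking the LLM for a plausible value for the missing field.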
@article{selby2025_2402.07770,
  title={Had enough of experts? Quantitative knowledge retrieval from large language models},
  author={David Selby and Kai Spriestersbach and Yuichiro Iwashita and Mohammad Saad and Dennis Bappert and Archana Warrier and Sumantrak Mukherjee and Koichi Kise and Sebastian Vollmer},
  journal={arXiv preprint arXiv:2402.07770},
  year={2025}
}