Had enough of experts? Quantitative knowledge retrieval from large language models

12 February 2024
David Selby
Kai Spriestersbach
Yuichiro Iwashita
Dennis Bappert
Archana Warrier
Sumantrak Mukherjee
M. Asim
Koichi Kise
Sebastian Vollmer
Abstract

Large language models (LLMs) have been extensively studied for their abilities to generate convincing natural language sequences; however, their utility for quantitative information retrieval is less well understood. Here we explore the feasibility of LLMs as a mechanism for quantitative knowledge retrieval to aid two data analysis tasks: elicitation of prior distributions for Bayesian models and imputation of missing data. We introduce a framework that leverages LLMs to enhance Bayesian workflows by eliciting expert-like prior knowledge and imputing missing data. Tested on diverse datasets, this approach can improve predictive accuracy and reduce data requirements, offering significant potential in healthcare, environmental science and engineering applications. We discuss the implications and challenges of treating LLMs as 'experts'.
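The prior-elicitation idea in the abstract can be illustrated with a minimal sketch. The paper's actual pipeline is not reproduced here; the JSON reply format, the `parse_prior` helper, and the example numbers below are all hypothetical assumptions. The sketch shows one plausible shape of the workflow: an LLM is asked to describe a prior as structured output, the reply is parsed into distribution parameters, and those parameters feed a standard conjugate Bayesian update.

```python
import json
import statistics


def parse_prior(llm_reply: str) -> tuple[float, float]:
    """Parse a hypothetical LLM reply describing a normal prior as JSON.

    Assumes the model was prompted to answer in the form
    {"distribution": "normal", "mean": ..., "sd": ...}.
    """
    spec = json.loads(llm_reply)
    if spec["distribution"] != "normal":
        raise ValueError("this sketch only handles normal priors")
    return float(spec["mean"]), float(spec["sd"])


def posterior_normal(prior_mean, prior_sd, data, noise_sd):
    """Conjugate normal-normal update with known observation noise."""
    prior_prec = 1.0 / prior_sd**2          # precision of the elicited prior
    data_prec = len(data) / noise_sd**2     # precision contributed by the data
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean
                            + data_prec * statistics.fmean(data))
    return post_mean, post_var**0.5


# Simulated LLM reply eliciting a prior for human body temperature (degrees C);
# in the real framework this string would come from an LLM API call.
reply = '{"distribution": "normal", "mean": 36.8, "sd": 0.4}'
mu0, sd0 = parse_prior(reply)
post_mean, post_sd = posterior_normal(mu0, sd0,
                                      data=[37.1, 36.9, 37.3],
                                      noise_sd=0.3)
```

The posterior mean lands between the elicited prior mean (36.8) and the sample mean (37.1), and the posterior standard deviation shrinks below the prior's, which is the sense in which an informative LLM-elicited prior can reduce data requirements.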

View on arXiv
@article{selby2025_2402.07770,
  title={Had enough of experts? Quantitative knowledge retrieval from large language models},
  author={David Selby and Kai Spriestersbach and Yuichiro Iwashita and Mohammad Saad and Dennis Bappert and Archana Warrier and Sumantrak Mukherjee and Koichi Kise and Sebastian Vollmer},
  journal={arXiv preprint arXiv:2402.07770},
  year={2025}
}