
Verbalized Probabilistic Graphical Modeling

Hengguan Huang
Xing Shen
Songtao Wang
Lingfa Meng
Dianbo Liu
Hao Wang
Samir Bhatt
Abstract

Human cognition excels at transcending sensory input and forming latent representations that structure our understanding of the world. Although Large Language Models (LLMs) can produce chain-of-thought reasoning, they lack a principled framework to capture latent structures and model uncertainty, especially in compositional reasoning tasks. We propose Verbalized Probabilistic Graphical Modeling (vPGM), a Bayesian prompting framework that guides LLMs to simulate key principles of Probabilistic Graphical Models (PGMs) in natural language. Unlike many traditional probabilistic methods that require substantial domain expertise or specialized training, vPGM bypasses expert-driven model design, making it well suited for scenarios with limited assumptions or scarce data. We evaluate vPGM on several compositional reasoning tasks, both closed-ended and open-ended. Our results indicate that vPGM effectively improves confidence calibration and text generation quality.
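To make the idea concrete, below is a minimal, hypothetical sketch of what a vPGM-style prompting loop might look like. All names (`build_pgm_prompt`, `combine_confidences`, the stubbed LLM call, and the example latent variables) are assumptions for illustration, not the authors' implementation: the LLM is prompted to verbalize latent variables with per-variable confidences, which are then combined into an answer-level probability.

```python
# Hypothetical sketch of vPGM-style prompting (an assumption, not the
# paper's actual implementation). An LLM is prompted to (1) verbalize the
# latent variables behind a question, (2) state a confidence for each, and
# the confidences are combined into a final answer probability.

from dataclasses import dataclass


@dataclass
class LatentVariable:
    name: str
    confidence: float  # verbalized probability in [0, 1]


def build_pgm_prompt(question: str) -> str:
    """Prompt asking the LLM to simulate PGM principles in natural language."""
    return (
        f"Question: {question}\n"
        "Step 1: List the latent variables needed to answer this question.\n"
        "Step 2: For each variable, state your confidence as a probability.\n"
        "Step 3: Combine the evidence and give a final answer with confidence."
    )


def combine_confidences(variables: list[LatentVariable]) -> float:
    """Naive combination under an assumed independence structure:
    multiply the verbalized per-variable confidences."""
    p = 1.0
    for v in variables:
        p *= v.confidence
    return p


def stub_llm(prompt: str) -> list[LatentVariable]:
    # Stand-in for a real LLM call; a deployment would parse the model's
    # verbalized response into these structured latent variables.
    return [
        LatentVariable("object_identity", 0.9),
        LatentVariable("spatial_relation", 0.8),
    ]


if __name__ == "__main__":
    prompt = build_pgm_prompt("Is the cup on the table?")
    latents = stub_llm(prompt)
    print(f"answer confidence: {combine_confidences(latents):.2f}")
```

The product rule here is only one possible aggregation; it assumes the verbalized latents are conditionally independent, which a full PGM treatment would relax via an explicit graph structure.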

@article{huang2025_2406.05516,
  title={Verbalized Probabilistic Graphical Modeling},
  author={Hengguan Huang and Xing Shen and Songtao Wang and Lingfa Meng and Dianbo Liu and Hao Wang and Samir Bhatt},
  journal={arXiv preprint arXiv:2406.05516},
  year={2025}
}