
Efficient Knowledge Probing of Large Language Models by Adapting Pre-trained Embeddings

Kartik Sharma
Yiqiao Jin
Rakshit Trivedi
Srijan Kumar
Main: 10 pages · 6 figures · 13 tables · Bibliography: 5 pages · Appendix: 5 pages
Abstract

Large language models (LLMs) acquire knowledge across diverse domains such as science, history, and geography during generative pre-training. However, due to their stochasticity, it is difficult to predict what knowledge LLMs have actually acquired. Prior work has probed this knowledge in different ways: investigating hidden representations, crafting specific task prompts, curating representative samples, and estimating output uncertainty. However, these methods require forward passes through the underlying model to probe its knowledge of a specific fact, making them computationally expensive and time-consuming. To bridge this gap, we propose PEEK, or Proxy Embeddings to Estimate Knowledge of LLMs, which leverages pre-trained embedding models that effectively encode factual knowledge as text or graphs as proxies for LLMs. First, we identify a training set of facts known by an LLM through various probing strategies, and then adapt embedding models to predict the LLM's outputs with a linear decoder layer. Comprehensive evaluation on 3 Wikipedia-derived datasets, 4 LLMs, and 7 embedding models shows that embeddings can predict LLM knowledge on a held-out set with up to 90% accuracy. Furthermore, we find that sentence embedding models are more suitable than graph embeddings for predicting LLM knowledge, shedding light on the underlying representation of the factual landscape. Thus, we believe that knowledge-adapted embeddings can be used to identify knowledge gaps in LLMs at scale and can provide deeper insights into LLMs' internal inductive biases. The code and data are made available at this https URL.
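The core idea, fitting a linear decoder on frozen pre-trained embeddings to predict whether an LLM knows a fact, can be sketched as follows. This is a minimal illustration, not the authors' released code: the embeddings and knowledge labels are synthetic stand-ins (in PEEK, embeddings come from pre-trained text or graph encoders, and labels from probing the LLM itself).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for frozen pre-trained sentence embeddings of facts
# (in practice these would come from an off-the-shelf encoder);
# here, random vectors with a planted linear signal.
n_facts, dim = 500, 64
fact_embeddings = rng.normal(size=(n_facts, dim))

# Hypothetical labels: 1 if the LLM answered the fact correctly.
# PEEK obtains these by probing the LLM once to build a training set.
w_true = rng.normal(size=dim)
llm_knows = (fact_embeddings @ w_true > 0).astype(int)

# Adapt the frozen embeddings with a linear decoder (logistic head)
# and evaluate on held-out facts -- no LLM forward passes needed.
train, test = slice(0, 400), slice(400, None)
decoder = LogisticRegression(max_iter=1000)
decoder.fit(fact_embeddings[train], llm_knows[train])
accuracy = decoder.score(fact_embeddings[test], llm_knows[test])
```

Once the decoder is fit, estimating whether the LLM knows a new fact costs only one embedding lookup and a dot product, which is what makes probing at scale tractable.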
