Improving Preference Extraction In LLMs By Identifying Latent Knowledge Through Classifying Probes

22 March 2025
Sharan Maiya
Yinhong Liu
Ramit Debnath
Anna Korhonen
Abstract

Large Language Models (LLMs) are often used as automated judges to evaluate text, but their effectiveness can be hindered by various unintentional biases. We propose using linear classifying probes, trained by leveraging differences between contrasting pairs of prompts, to directly access LLMs' latent knowledge and extract more accurate preferences. Through extensive experiments using models of varying size from four different families and six diverse datasets assessing text quality evaluation and common sense reasoning, we demonstrate that both supervised and unsupervised probing approaches consistently outperform traditional generation-based judgement while maintaining similar computational costs. These probes generalise under domain shifts and can even outperform finetuned evaluators with the same training data size. Our results suggest linear probing offers an accurate, robust and computationally efficient approach for LLM-as-judge tasks while providing interpretable insights into how models encode judgement-relevant knowledge. Our data and code will be openly released in the future.
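The probing approach described above can be illustrated with a minimal sketch: a linear classifier trained on differences between hidden states from contrasting prompt pairs. All data below is synthetic (the hidden states, dimensionality, and separation direction are assumptions for illustration, not the paper's actual setup), and plain logistic regression stands in for the supervised probe.

```python
import numpy as np

# Hedged sketch: a linear "classifying probe" trained on differences between
# hidden states of contrasting prompt pairs. All quantities are synthetic.
rng = np.random.default_rng(0)
d = 16            # hidden-state dimensionality (assumed)
n = 200           # number of contrastive prompt pairs (assumed)

# Simulate hidden states: a latent "preference" direction w_true separates
# the preferred from the dispreferred completion of each pair.
w_true = rng.normal(size=d)
h_pos = rng.normal(size=(n, d)) + 0.5 * w_true   # states for preferred text
h_neg = rng.normal(size=(n, d)) - 0.5 * w_true   # states for dispreferred text

# Build contrastive features from paired differences, symmetrised by
# including each pair in both orders with flipped labels.
X = np.concatenate([h_pos - h_neg, h_neg - h_pos])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Fit a logistic-regression probe with plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted P(preferred)
    w -= 0.1 * (X.T @ (p - y)) / len(y)     # gradient step on log-loss

# The probe's preference between texts a and b is sign(w . (h_a - h_b)).
acc = np.mean((X @ w > 0) == (y == 1))
print(f"probe training accuracy: {acc:.2f}")
```

Because the probe is linear, evaluating a preference at inference time is a single dot product over cached hidden states, which is consistent with the abstract's claim of computational cost similar to generation-based judging.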

@article{maiya2025_2503.17755,
  title={Improving Preference Extraction In LLMs By Identifying Latent Knowledge Through Classifying Probes},
  author={Sharan Maiya and Yinhong Liu and Ramit Debnath and Anna Korhonen},
  journal={arXiv preprint arXiv:2503.17755},
  year={2025}
}