MetaCheckGPT -- A Multi-task Hallucination Detector Using LLM Uncertainty and Meta-models
Abstract
This paper presents our winning solution for the SemEval-2024 Task 6 competition. We propose a meta-regressor framework over large language models (LLMs) for model evaluation and integration that achieves the highest scores on the leaderboard. Our approach leverages uncertainty signals present in a diverse basket of LLMs to detect hallucinations more robustly.
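As a rough illustration of the idea (not the authors' released code), the meta-regressor can be sketched as a model trained on per-LLM uncertainty features and used to score candidate outputs for hallucination. The feature layout, the gradient-boosted regressor, and the decision threshold below are assumptions made for the sketch only.

```python
# Hedged sketch: a meta-regressor over per-LLM uncertainty signals.
# The feature set, regressor choice, and 0.5 threshold are illustrative
# assumptions, not the authors' exact configuration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in data: each row concatenates uncertainty signals from a
# "basket" of LLMs (e.g., mean token log-probability, entropy,
# self-consistency disagreement), one group of signals per model.
n_samples, n_llms, n_signals = 500, 4, 3
X = rng.normal(size=(n_samples, n_llms * n_signals))
# Toy labels: 1.0 = hallucinated output, 0.0 = faithful output.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_samples) > 0).astype(float)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Meta-regressor: maps the pooled uncertainty features to a hallucination score.
meta = GradientBoostingRegressor(random_state=0)
meta.fit(X_train, y_train)

scores = meta.predict(X_test)
preds = (scores > 0.5).astype(float)  # threshold chosen for illustration
print("toy accuracy:", (preds == y_test).mean())
```

In practice the features would come from the actual LLM basket rather than synthetic data, and the regressor's continuous score can be thresholded or ranked depending on the evaluation metric.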
