The Two Sides of the Coin: Hallucination Generation and Detection with LLMs as Evaluators for LLMs

12 July 2024
Anh Thu Maria Bui
Saskia Felizitas Brech
Natalie Hußfeldt
Tobias Jennert
Melanie Ullrich
Timo Breuer
Narjes Nikzad Khasmakhi
Philipp Schaer
arXiv:2407.09152
Abstract

Hallucination detection in Large Language Models (LLMs) is crucial for ensuring their reliability. This work presents our participation in the CLEF ELOQUENT HalluciGen shared task, whose goal is to develop evaluators for both generating and detecting hallucinated content. We explored the capabilities of four LLMs (Llama 3, Gemma, GPT-3.5 Turbo, and GPT-4) for both tasks. For the detection task, we additionally combined all four models via ensemble majority voting. The results provide valuable insights into the strengths and weaknesses of these LLMs in handling hallucination generation and detection.
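To make the detection-side ensemble concrete, here is a minimal sketch of majority voting over per-model binary verdicts. The vote values, the tie-breaking rule, and the `majority_vote` helper are illustrative assumptions, not the authors' implementation; in the shared task each verdict would come from prompting one of the four LLMs as an evaluator.

```python
from collections import Counter

def majority_vote(verdicts: list[bool]) -> bool:
    """Return the label chosen by most evaluator models.

    True means "the output is hallucinated". Ties favor the
    hallucinated label, a conservative choice assumed here and
    not taken from the paper.
    """
    counts = Counter(verdicts)
    return counts[True] >= counts[False]

# Hypothetical example: three of four evaluators (e.g. Llama 3,
# Gemma, GPT-3.5 Turbo, GPT-4) flag the output as hallucinated.
votes = [True, True, False, True]
print(majority_vote(votes))  # True -> ensemble label: hallucinated
```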
