Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models

3 September 2023
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, A. Luu, Wei Bi, Freda Shi, Shuming Shi
Communities: RALM, LRM, HILM
arXiv:2309.01219
Abstract

While large language models (LLMs) have demonstrated remarkable capabilities across a range of downstream tasks, a significant concern revolves around their propensity to exhibit hallucinations: LLMs occasionally generate content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge. This phenomenon poses a substantial challenge to the reliability of LLMs in real-world scenarios. In this paper, we survey recent efforts on the detection, explanation, and mitigation of hallucination, with an emphasis on the unique challenges posed by LLMs. We present taxonomies of the LLM hallucination phenomena and evaluation benchmarks, analyze existing approaches aiming at mitigating LLM hallucination, and discuss potential directions for future research.
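To make the taxonomy in the abstract concrete, below is a minimal illustrative sketch (not drawn from the paper; all prompts and outputs are hypothetical) of the three hallucination categories the survey describes: content that conflicts with the user input, with previously generated context, or with established world knowledge.

# Illustrative sketch only; the examples below are invented for clarity and
# do not come from the surveyed benchmarks.
from dataclasses import dataclass

@dataclass
class HallucinationExample:
    kind: str    # taxonomy category from the abstract
    prompt: str  # user input or earlier conversation context
    output: str  # model response exhibiting the issue
    why: str     # why it counts as a hallucination

EXAMPLES = [
    HallucinationExample(
        kind="input-conflicting",
        prompt="Summarize: 'The meeting is on Tuesday at 3 pm.'",
        output="The meeting is scheduled for Wednesday morning.",
        why="The response diverges from the user-provided input.",
    ),
    HallucinationExample(
        kind="context-conflicting",
        prompt="(earlier in the same conversation) 'Alice was born in 1990.'",
        output="Since Alice was born in 1985, she is older than Bob.",
        why="The response contradicts previously generated context.",
    ),
    HallucinationExample(
        kind="fact-conflicting",
        prompt="Who wrote 'Pride and Prejudice'?",
        output="It was written by Charlotte Brontë.",
        why="The claim misaligns with established world knowledge.",
    ),
]

if __name__ == "__main__":
    for ex in EXAMPLES:
        print(f"[{ex.kind}] {ex.why}")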
