
From "Hallucination" to "Suture": Insights from Language Philosophy to Enhance Large Language Models

Abstract

This paper explores hallucination phenomena in large language models (LLMs) through the lens of language philosophy and psychoanalysis. By incorporating Lacan's concepts of the "chain of signifiers" and "suture points," we propose the Anchor-RAG framework as a novel approach to mitigating hallucinations. In contrast to the predominant reliance on trial-and-error experiments, constant adjustment of mathematical formulas, or resource-intensive methods that emphasize quantity over quality, our approach returns to the fundamental principles of linguistics to analyze the root causes of hallucinations in LLMs. Drawing on robust theoretical foundations, we derive algorithms and models that not only reduce hallucinations but also enhance LLM performance and improve output quality. This paper seeks to establish a comprehensive theoretical framework for understanding hallucinations in LLMs and aims to challenge the prevalent "guess-and-test" approach and rat-race mentality in the field. We aspire to pave the way for a new era of interpretable LLMs, offering deeper insights into the inner workings of language-based AI systems.
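
The abstract names the Anchor-RAG framework but does not describe its mechanics. As a purely illustrative sketch of how an anchor-guided retrieval step could be combined with a standard retrieval-augmented generation loop, the Python below flags low-confidence spans of a draft answer as "anchors", retrieves evidence for each, and regenerates a grounded answer. Every function name, the keyword-overlap retriever, and the confidence threshold are assumptions made for illustration, not the paper's method.

# Hypothetical sketch of an anchor-guided RAG loop (assumptions, not the
# paper's implementation): spans of a draft answer that the model is least
# certain about ("anchors", loosely analogous to the suture points discussed
# in the abstract) become retrieval queries, and the retrieved evidence is
# fed into a second generation pass to ground those spans.

from dataclasses import dataclass


@dataclass
class Anchor:
    term: str          # uncertain span extracted from the draft answer
    confidence: float  # model confidence for that span (lower = more suspect)


def identify_anchors(draft, token_confidences, threshold=0.5):
    """Flag tokens whose confidence falls below the threshold.

    In a real system the confidences would come from the LLM's token
    log-probabilities; here they are supplied directly for illustration.
    """
    anchors = []
    for token in draft.split():
        conf = token_confidences.get(token, 1.0)
        if conf < threshold:
            anchors.append(Anchor(term=token, confidence=conf))
    return anchors


def retrieve(query, corpus, top_k=1):
    """Toy keyword-overlap retriever standing in for a vector store."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def anchor_rag(question, draft, token_confidences, corpus, generate):
    """One pass of the assumed loop:
    draft -> find uncertain anchors -> retrieve evidence -> regenerate.
    """
    anchors = identify_anchors(draft, token_confidences)
    evidence = []
    for anchor in anchors:
        evidence.extend(retrieve(f"{question} {anchor.term}", corpus))
    grounded_prompt = (
        f"Question: {question}\n"
        "Evidence:\n" + "\n".join(f"- {doc}" for doc in evidence) + "\n"
        "Answer, staying consistent with the evidence:"
    )
    return generate(grounded_prompt)


if __name__ == "__main__":
    corpus = [
        "The Eiffel Tower was completed in 1889 for the Exposition Universelle.",
        "Paris is the capital of France.",
    ]
    # Stub generator: a real system would call an LLM here.
    answer = anchor_rag(
        question="When was the Eiffel Tower completed?",
        draft="The Eiffel Tower was completed in 1887",
        token_confidences={"1887": 0.2},
        corpus=corpus,
        generate=lambda prompt: f"[LLM output for prompt]\n{prompt}",
    )
    print(answer)

A real deployment would replace the stub generator and toy retriever with an LLM call and a vector store, and derive the token confidences from the model's log-probabilities rather than supplying them by hand.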

@article{wang2025_2503.14392,
  title={From "Hallucination" to "Suture": Insights from Language Philosophy to Enhance Large Language Models},
  author={Qiantong Wang},
  journal={arXiv preprint arXiv:2503.14392},
  year={2025}
}