Why Language Models Hallucinate
arXiv: 2509.04664

4 September 2025
Adam Tauman Kalai
Ofir Nachum
Santosh Vempala
Edwin Zhang
    HILM, LRM
arXiv (abs) · PDF · HTML · HuggingFace (137 upvotes) · GitHub (76639 ★)

Papers citing "Why Language Models Hallucinate"

5 papers shown

Cognitive Load Limits in Large Language Models: Benchmarking Multi-Hop Reasoning
Sai Teja Reddy Adapala
LRM, ELM
23 Sep 2025

Disproving the Feasibility of Learned Confidence Calibration Under Binary Supervision: An Information-Theoretic Impossibility
Arjun S. Nair, Kristina P. Sinaga
17 Sep 2025

Vibe Coding for UX Design: Understanding UX Professionals' Perceptions of AI-Assisted Design and Development
Jie Li, Youyang Hou, Laura Lin, Ruihao Zhu, Hancheng Cao, Abdallah El Ali
12 Sep 2025

XML Prompting as Grammar-Constrained Interaction: Fixed-Point Semantics, Convergence Guarantees, and Human-AI Protocols
Faruk Alpay, Taylan Alpay
09 Sep 2025

Proof-Carrying Numbers (PCN): A Protocol for Trustworthy Numeric Answers from LLMs via Claim Verification
Aivin V. Solatorio
08 Sep 2025