arXiv:2510.24476
Mitigating Hallucination in Large Language Models (LLMs): An Application-Oriented Survey on RAG, Reasoning, and Agentic Systems
28 October 2025
Yihan Li, Xiyuan Fu, Ghanshyam Verma, P. Buitelaar, Mingming Liu
Papers citing "Mitigating Hallucination in Large Language Models (LLMs): An Application-Oriented Survey on RAG, Reasoning, and Agentic Systems": none found.