arXiv: 2409.05746
LLMs Will Always Hallucinate, and We Need to Live With This

9 September 2024
Sourav Banerjee, Ayushi Agarwal, Saloni Singla
HILM, LRM

Papers citing "LLMs Will Always Hallucinate, and We Need to Live With This"

28 citing papers
A Concise Review of Hallucinations in LLMs and their Mitigation
Parth Pulkundwar, Vivek Dhanawade, Rohit Yadav, Minal Sonkar, Medha Asurlekar, Sarita Rathod
HILM · 02 Dec 2025
The Oracle and The Prism: A Decoupled and Efficient Framework for Generative Recommendation Explanation
Jiaheng Zhang, Daqiang Zhang
20 Nov 2025
Flash-Fusion: Enabling Expressive, Low-Latency Queries on IoT Sensor Streams with LLMs
Kausar Patherya, Ashutosh Dhekne, Francisco Romero
14 Nov 2025
Maestro: Learning to Collaborate via Conditional Listwise Policy Optimization for Multi-Agent LLMs
ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS Annals), 2025
Wei Yang, Jiacheng Pang, Shixuan Li, P. Bogdan, Stephen Tu, Jesse Thomason
LLMAG · 08 Nov 2025
HACK: Hallucinations Along Certainty and Knowledge Axes
Adi Simhi, Jonathan Herzig, Itay Itzhak, Dana Arad, Zorik Gekhman, Roi Reichart, Fazl Barez, Gabriel Stanovsky, Idan Szpektor, Yonatan Belinkov
28 Oct 2025
Mitigating Hallucination in Large Language Models (LLMs): An Application-Oriented Survey on RAG, Reasoning, and Agentic Systems
Yihan Li, Xiyuan Fu, Ghanshyam Verma, P. Buitelaar, Mingming Liu
LRM · 28 Oct 2025
KoSimpleQA: A Korean Factuality Benchmark with an Analysis of Reasoning LLMs
Donghyeon Ko, Yeguk Jin, Kyubyung Chae, Byungwook Lee, Chansong Jo, Sookyo In, Jaehong Lee, Taesup Kim, Donghyun Kwak
HILM · 21 Oct 2025
MOSAIC: Multi-agent Orchestration for Task-Intelligent Scientific Coding
Siddeshwar Raghavan, Tanwi Mallick
AI4CE · 09 Oct 2025
Large Language Models Hallucination: A Comprehensive Survey
Aisha Alansari, Hamzah Luqman
HILM, LRM · 05 Oct 2025
Hallucination is Inevitable for LLMs with the Open World Assumption
Bowen Xu
LRM · 29 Sep 2025
Are Hallucinations Bad Estimations?
Hude Liu, Jerry Yao-Chieh Hu, Jennifer Yuntong Zhang, Zhao Song, Han Liu
HILM · 25 Sep 2025
Reward Evolution with Graph-of-Thoughts: A Bi-Level Language Model Framework for Reinforcement Learning
Changwei Yao, Xinzi Liu, Chen Li, Marios Savvides
LM&Ro, LRM · 19 Sep 2025
Deploying AI for Signal Processing education: Selected challenges and intriguing opportunities
Jarvis Haupt, Qin Lu, Yanning Shen, Jia Chen, Yue Dong, Dan McCreary, Mehmet Akçakaya, G. Giannakis
10 Sep 2025
Proof-Carrying Numbers (PCN): A Protocol for Trustworthy Numeric Answers from LLMs via Claim Verification
Aivin V. Solatorio
08 Sep 2025
Charting the Future of Scholarly Knowledge with AI: A Community Perspective
Azanzi Jiomekong, Hande Küçük McGinty, Keith G. Mills, A. Oelen, Enayat Rajabi, ..., Anmol Saini, Janice Anta Zebaze, Hannah Kim, Anna M. Jacyszyn, Sören Auer
27 Aug 2025
Mitigating Hallucinations in Large Language Models via Causal Reasoning
Yuangang Li, Yiqing Shen, Yi Nian, Jiechao Gao, Ziyi Wang, Chenxiao Yu, Shawn Li, Jie Wang, Xiyang Hu, Yue Zhao
HILM, LRM · 17 Aug 2025
RvLLM: LLM Runtime Verification with Domain Knowledge
Yedi Zhang, Sun Yi Emma, Annabelle Lee Jia En, Jin Song Dong
24 May 2025
Towards medical AI misalignment: a preliminary study
Barbara Puccio, Federico Castagna, Allan Tucker, Pierangelo Veltri
22 May 2025
Osiris: A Lightweight Open-Source Hallucination Detection System
Alex Shan, John Bauer, Christopher D. Manning
HILM, VLM · 07 May 2025
Hallucination, reliability, and the role of generative AI in science
Charles Rathkopf
HILM · 11 Apr 2025
Logic-RAG: Augmenting Large Multimodal Models with Visual-Spatial Knowledge for Road Scene Understanding
IEEE International Conference on Robotics and Automation (ICRA), 2025
Imran Kabir, Md. Alimoor Reza, Syed Masum Billah
ReLM, VLM, LRM · 16 Mar 2025
Grandes modelos de lenguaje: de la predicción de palabras a la comprensión? [Large language models: from word prediction to understanding?]
Carlos Gómez-Rodríguez
SyDa, AILaw, ELM, VLM · 25 Feb 2025
"Generalization is hallucination" through the lens of tensor completions
Liang Ze Wong
VLM · 24 Feb 2025
Evaluating Step-by-step Reasoning Traces: A Survey
Jinu Lee, Anjali Narayan-Chen
LRM, ELM · 17 Feb 2025
Valuable Hallucinations: Realizable Non-realistic Propositions
Qiucheng Chen, Bo Wang
LRM · 16 Feb 2025
CondAmbigQA: A Benchmark and Dataset for Conditional Ambiguous Question Answering
Zongxi Li, Jian Wang, Haoran Xie, S. J. Qin
03 Feb 2025
Large Language Models as Common-Sense Heuristics
Andrey Borro, Patricia J. Riddle, Michael W Barley, Michael Witbrock
LRM, LM&Ro · 31 Jan 2025
MALMM: Multi-Agent Large Language Models for Zero-Shot Robotics Manipulation
Harsh Singh, Rocktim Jyoti Das, Mingfei Han, Preslav Nakov, Ivan Laptev
LM&Ro, LLMAG · 26 Nov 2024