ResearchTrend.AI
Hallucination is Inevitable: An Innate Limitation of Large Language Models
  Ziwei Xu, Sanjay Jain, Mohan S. Kankanhalli · 22 January 2024 · HILM, LRM
Papers citing "Hallucination is Inevitable: An Innate Limitation of Large Language Models"

50 / 116 papers shown
Fine-Tuning Large Language Models and Evaluating Retrieval Methods for Improved Question Answering on Building Codes
  Mohammad Aqib, Mohd Hamza, Qipei Mei, Ying Hei Chui · 07 May 2025 · RALM, ELM

LLMpatronous: Harnessing the Power of LLMs For Vulnerability Detection
  Rajesh Yarra · 25 Apr 2025

CoheMark: A Novel Sentence-Level Watermark for Enhanced Text Quality
  Junyan Zhang, Shuliang Liu, Aiwei Liu, Yubo Gao, J. Li, Xiaojie Gu, Xuming Hu · 24 Apr 2025 · WaLM

aiXamine: Simplified LLM Safety and Security
  Fatih Deniz, Dorde Popovic, Yazan Boshmaf, Euisuh Jeong, M. Ahmad, Sanjay Chawla, Issa M. Khalil · 21 Apr 2025 · ELM

Beyond Misinformation: A Conceptual Framework for Studying AI Hallucinations in (Science) Communication
  Anqi Shao · 18 Apr 2025

Purposefully Induced Psychosis (PIP): Embracing Hallucination as Imagination in Large Language Models
  Kris Pilcher, Esen K. Tütüncü · 16 Apr 2025 · LLMAG
HalluSearch at SemEval-2025 Task 3: A Search-Enhanced RAG Pipeline for Hallucination Detection
  Mohamed A. Abdallah, S. El-Beltagy · 14 Apr 2025 · HILM

Hallucination, reliability, and the role of generative AI in science
  Charles Rathkopf · 11 Apr 2025 · HILM

Missing Premise exacerbates Overthinking: Are Reasoning Models losing Critical Thinking Skill?
  Chenrui Fan, Ming Li, Lichao Sun, Tianyi Zhou · 09 Apr 2025 · LRM

Unlocking the Potential of Past Research: Using Generative AI to Reconstruct Healthcare Simulation Models
  Thomas Monks, Alison Harper, Amy Heather · 27 Mar 2025

OAEI-LLM-T: A TBox Benchmark Dataset for Understanding Large Language Model Hallucinations in Ontology Matching
  Zhangcheng Qiang · 25 Mar 2025

AgentRxiv: Towards Collaborative Autonomous Research
  Samuel Schmidgall, Michael Moor · 23 Mar 2025
LeRAAT: LLM-Enabled Real-Time Aviation Advisory Tool
  Marc R. Schlichting, Vale Rasmussen, Heba Alazzeh, Houjun Liu, Kiana Jafari, Amelia Hardy, Dylan M. Asmar, Mykel J. Kochenderfer · 05 Mar 2025

Graph-Augmented Reasoning: Evolving Step-by-Step Knowledge Graph Retrieval for LLM Reasoning
  Wenjie Wu, Yongcheng Jing, Yingjie Wang, Wenbin Hu, Dacheng Tao · 03 Mar 2025 · RALM, LRM

Grandes modelos de lenguaje: de la predicción de palabras a la comprensión? [Large Language Models: From Word Prediction to Comprehension?]
  Carlos Gómez-Rodríguez · 25 Feb 2025 · SyDa, AILaw, ELM, VLM

"Generalization is hallucination" through the lens of tensor completions
  Liang Ze Wong · 24 Feb 2025 · VLM

The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve?
  Zhenheng Tang, Xiang Liu, Qian Wang, Peijie Dong, Bingsheng He, Xiaowen Chu, Bo Li · 24 Feb 2025 · LRM
Hallucination Detection in Large Language Models with Metamorphic Relations
  Borui Yang, Md Afif Al Mamun, Jie M. Zhang, Gias Uddin · 20 Feb 2025 · HILM

Can Your Uncertainty Scores Detect Hallucinated Entity?
  Min-Hsuan Yeh, Max Kamachee, Seongheon Park, Yixuan Li · 17 Feb 2025 · HILM

Automated Consistency Analysis of LLMs
  Aditya Patwardhan, Vivek Vaidya, Ashish Kundu · 10 Feb 2025

Delta - Contrastive Decoding Mitigates Text Hallucinations in Large Language Models
  Cheng Peng Huang, Hao-Yuan Chen · 09 Feb 2025 · HILM

Position: Multimodal Large Language Models Can Significantly Advance Scientific Reasoning
  Yibo Yan, Shen Wang, Jiahao Huo, Jingheng Ye, Zhendong Chu, Xuming Hu, Philip S. Yu, Carla P. Gomes, B. Selman, Qingsong Wen · 05 Feb 2025 · LRM
Breaking Focus: Contextual Distraction Curse in Large Language Models
  Yue Huang, Yanbo Wang, Zixiang Xu, Chujie Gao, Siyuan Wu, Jiayi Ye, Xiuying Chen, Pin-Yu Chen, X. Zhang · 03 Feb 2025 · AAML

Risk-Aware Distributional Intervention Policies for Language Models
  Bao Nguyen, Binh Nguyen, Duy Nguyen, V. Nguyen · 28 Jan 2025

Personalizing Education through an Adaptive LMS with Integrated LLMs
  Kyle Spriggs, Meng Cheng Lau, Kalpdrum Passi · 24 Jan 2025 · AI4Ed

Hallucination Detox: Sensitivity Dropout (SenD) for Large Language Model Training
  Shahrad Mohammadzadeh, Juan David Guerra, Marco Bonizzato, Reihaneh Rabbany, Golnoosh Farnadi · 08 Jan 2025 · HILM
Visual Large Language Models for Generalized and Specialized Applications
  Yifan Li, Zhixin Lai, Wentao Bao, Zhen Tan, Anh Dao, Kewei Sui, Jiayi Shen, Dong Liu, Huan Liu, Yu Kong · 06 Jan 2025 · VLM

Can Large Audio-Language Models Truly Hear? Tackling Hallucinations with Multi-Task Assessment and Stepwise Audio Reasoning
  Chun-Yi Kuan, Hung-yi Lee · 03 Jan 2025 · AuLLM, LRM

TinyLLM: A Framework for Training and Deploying Language Models at the Edge Computers
  Savitha Viswanadh Kandala, Pramuka Medaranga, Ambuj Varshney · 19 Dec 2024

Exploring Facets of Language Generation in the Limit
  Moses Charikar, Chirag Pabbaraju · 22 Nov 2024 · LRM
Towards a Middleware for Large Language Models
  Narcisa Guran, Florian Knauf, Man Ngo, Stefan Petrescu, Jan S. Rellermeyer · 21 Nov 2024

Measuring Non-Adversarial Reproduction of Training Data in Large Language Models
  Michael Aerni, Javier Rando, Edoardo Debenedetti, Nicholas Carlini, Daphne Ippolito, F. Tramèr · 15 Nov 2024

LLM Hallucination Reasoning with Zero-shot Knowledge Test
  Seongmin Lee, Hsiang Hsu, Chun-Fu Chen · 14 Nov 2024 · LRM

CriticAL: Critic Automation with Language Models
  Michael Y. Li, Vivek Vajipey, Noah D. Goodman, Emily B. Fox · 10 Nov 2024

Climate AI for Corporate Decarbonization Metrics Extraction
  Aditya Dave, Mengchen Zhu, Dapeng Hu, Sachin Tiwari · 05 Nov 2024
The Potential of LLMs in Medical Education: Generating Questions and Answers for Qualification Exams
  Yunqi Zhu, Wen Tang, Ying Sun, Xuebing Yang, Liyang Dou, Yifan Gu, Yuanyuan Wu, Wensheng Zhang · 31 Oct 2024 · LM&MA, ELM

No Free Lunch: Fundamental Limits of Learning Non-Hallucinating Generative Models
  Changlong Wu, A. Grama, Wojciech Szpankowski · 24 Oct 2024

Opportunities and Challenges of Generative-AI in Finance
  Akshar Prabhu Desai, Ganesh Satish Mallya, Mohammad Luqman, Tejasvi Ravi, Nithya Kota, Pranjul Yadav · 21 Oct 2024 · AIFin

RAG-DDR: Optimizing Retrieval-Augmented Generation Using Differentiable Data Rewards
  Xinze Li, Sen Mei, Zhenghao Liu, Yukun Yan, Shuo Wang, ..., H. Chen, Ge Yu, Zhiyuan Liu, Maosong Sun, Chenyan Xiong · 17 Oct 2024
REFINE on Scarce Data: Retrieval Enhancement through Fine-Tuning via Model Fusion of Embedding Models
  Ambuje Gupta, Mrinal Rawat, Andreas Stolcke, Roberto Pieraccini · 16 Oct 2024 · RALM

On Classification with Large Language Models in Cultural Analytics
  David Bamman, Kent K. Chang, L. Lucy, Naitian Zhou · 15 Oct 2024

A Theoretical Survey on Foundation Models
  Shi Fu, Yuzhu Chen, Yingjie Wang, Dacheng Tao · 15 Oct 2024

MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
  Chenxi Wang, Xiang Chen, N. Zhang, Bozhong Tian, Haoming Xu, Shumin Deng, H. Chen · 15 Oct 2024 · MLLM, LRM

DAWN: Designing Distributed Agents in a Worldwide Network
  Zahra Aminiranjbar, Jianan Tang, Qiudan Wang, Shubha Pant, Mahesh Viswanathan · 11 Oct 2024 · LLMAG, AI4CE
The Dynamics of Social Conventions in LLM populations: Spontaneous Emergence, Collective Biases and Tipping Points
  Ariel Flint Ashery, L. Aiello, Andrea Baronchelli · 11 Oct 2024 · AI4CE

A Closer Look at Machine Unlearning for Large Language Models
  Xiaojian Yuan, Tianyu Pang, Chao Du, Kejiang Chen, Weiming Zhang, Min-Bin Lin · 10 Oct 2024 · MU

HE-Drive: Human-Like End-to-End Driving with Vision Language Models
  Junming Wang, Xingyu Zhang, Zebin Xing, Songen Gu, Xiaoyang Guo, Yang Hu, Ziying Song, Qian Zhang, Xiaoxiao Long, Wei Yin · 07 Oct 2024

Mitigating Hallucinations Using Ensemble of Knowledge Graph and Vector Store in Large Language Models to Enhance Mental Health Support
  Abdul Muqtadir, H. M. Bilal, Ayesha Yousaf, Hafiz Farooq Ahmed, Jamil Hussain · 06 Oct 2024 · AI4MH

An X-Ray Is Worth 15 Features: Sparse Autoencoders for Interpretable Radiology Report Generation
  Ahmed Abdulaal, Hugo Fry, Nina Montaña-Brown, Ayodeji Ijishakin, Jack Gao, Stephanie L. Hyland, Daniel C. Alexander, Daniel Coelho De Castro · 04 Oct 2024 · MedIm

FactCheckmate: Preemptively Detecting and Mitigating Hallucinations in LMs
  Deema Alnuhait, Neeraja Kirtane, Muhammad Khalifa, Hao Peng · 03 Oct 2024 · HILM, LRM