ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Cognitive Mirage: A Review of Hallucinations in Large Language Models (arXiv:2309.06794)

13 September 2023
Hongbin Ye
Tong Liu
Aijia Zhang
Wei Hua
Weiqiang Jia
    HILM

Papers citing "Cognitive Mirage: A Review of Hallucinations in Large Language Models"

50 / 71 papers shown
Uncertainty-Aware Decoding with Minimum Bayes Risk
Nico Daheim
Clara Meister
Thomas Möllenhoff
Iryna Gurevych
53
0
0
07 Mar 2025
Towards Understanding Text Hallucination of Diffusion Models via Local Generation Bias
Rui Lu
Runzhe Wang
Kaifeng Lyu
Xitai Jiang
Gao Huang
Mengdi Wang
DiffM
86
0
0
05 Mar 2025
On The Truthfulness of 'Surprisingly Likely' Responses of Large Language Models
Naman Goel
HILM
57
0
0
28 Jan 2025
LLM Hallucinations in Practical Code Generation: Phenomena, Mechanism, and Mitigation
Ziyao Zhang
Yanlin Wang
Chong Wang
Jiachi Chen
Zibin Zheng
114
11
0
20 Jan 2025
Hallucination Detox: Sensitivity Dropout (SenD) for Large Language Model Training
Shahrad Mohammadzadeh
Juan David Guerra
Marco Bonizzato
Reihaneh Rabbany
Golnoosh Farnadi
HILM
49
0
0
08 Jan 2025
Evaluating LLMs Capabilities Towards Understanding Social Dynamics
Anique Tahir
Lu Cheng
Manuel Sandoval
Yasin N. Silva
Deborah L. Hall
Huan Liu
64
0
0
20 Nov 2024
DAWN: Designing Distributed Agents in a Worldwide Network
Zahra Aminiranjbar
Jianan Tang
Qiudan Wang
Shubha Pant
Mahesh Viswanathan
LLMAG
AI4CE
23
1
0
11 Oct 2024
JurEE not Judges: safeguarding llm interactions with small, specialised Encoder Ensembles
Dom Nasrabadi
24
1
0
11 Oct 2024
The Effects of Hallucinations in Synthetic Training Data for Relation Extraction
Steven Rogulsky
Nicholas Popovic
Michael Färber
HILM
23
1
0
10 Oct 2024
FactCheckmate: Preemptively Detecting and Mitigating Hallucinations in LMs
Deema Alnuhait
Neeraja Kirtane
Muhammad Khalifa
Hao Peng
HILM
LRM
34
2
0
03 Oct 2024
FaithEval: Can Your Language Model Stay Faithful to Context, Even If "The Moon is Made of Marshmallows"
Yifei Ming
Senthil Purushwalkam
Shrey Pandit
Zixuan Ke
Xuan-Phi Nguyen
Caiming Xiong
Shafiq R. Joty
HILM
110
16
0
30 Sep 2024
A Novel Idea Generation Tool using a Structured Conversational AI (CAI) System
B. Sankar
Dibakar Sen
LLMAG
LRM
29
4
0
09 Sep 2024
Blockchain for Large Language Model Security and Safety: A Holistic Survey
Caleb Geren
Amanda Board
Gaby G. Dagher
Tim Andersen
Jun Zhuang
44
5
0
26 Jul 2024
Knowledge Overshadowing Causes Amalgamated Hallucination in Large Language Models
Yuji Zhang
Sha Li
Jiateng Liu
Pengfei Yu
Yi Ren Fung
Jing Li
Manling Li
Heng Ji
29
10
0
10 Jul 2024
Why does in-context learning fail sometimes? Evaluating in-context learning on open and closed questions
Xiang Li
Haoran Tang
Siyu Chen
Ziwei Wang
Ryan Chen
Marcin Abram
LRM
29
1
0
02 Jul 2024
Automated Text Scoring in the Age of Generative AI for the GPU-poor
C. Ormerod
Alexander Kwako
38
2
0
02 Jul 2024
Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges
Aman Singh Thakur
Kartik Choudhary
Venkat Srinik Ramayapally
Sankaran Vaidyanathan
Dieuwke Hupkes
ELM
ALM
45
55
0
18 Jun 2024
Understanding Hallucinations in Diffusion Models through Mode Interpolation
Sumukh K. Aithal
Pratyush Maini
Zachary Chase Lipton
J. Zico Kolter
DiffM
38
18
0
13 Jun 2024
MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset
Weiqi Wang
Yangqiu Song
LRM
35
8
0
04 Jun 2024
SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales
Tianyang Xu
Shujin Wu
Shizhe Diao
Xiaoze Liu
Xingyao Wang
Yangyi Chen
Jing Gao
LRM
29
27
0
31 May 2024
Data Augmentation for Text-based Person Retrieval Using Large Language Models
Zheng Li
Lijia Si
Caili Guo
Yang Yang
Qiushi Cao
33
3
0
20 May 2024
THRONE: An Object-based Hallucination Benchmark for the Free-form Generations of Large Vision-Language Models
Prannay Kaul
Zhizhong Li
Hao-Yu Yang
Yonatan Dukler
Ashwin Swaminathan
C. Taylor
Stefano Soatto
HILM
43
15
0
08 May 2024
Extracting chemical food safety hazards from the scientific literature automatically using large language models
Neris Özen
Wenjuan Mu
E. V. Asselt
L. Bulk
26
1
0
01 May 2024
A Survey on the Memory Mechanism of Large Language Model based Agents
Zeyu Zhang
Xiaohe Bo
Chen Ma
Rui Li
Xu Chen
Quanyu Dai
Jieming Zhu
Zhenhua Dong
Ji-Rong Wen
LLMAG
KELM
34
105
0
21 Apr 2024
Large Language Models Meet User Interfaces: The Case of Provisioning Feedback
Stanislav Pozdniakov
Jonathan Brazil
Solmaz Abdi
Aneesha Bakharia
Shazia Sadiq
D. Gašević
Paul Denny
Hassan Khosravi
ELM
29
13
0
17 Apr 2024
Multicalibration for Confidence Scoring in LLMs
Gianluca Detommaso
Martín Bertrán
Riccardo Fogliato
Aaron Roth
24
12
0
06 Apr 2024
Source-Aware Training Enables Knowledge Attribution in Language Models
Muhammad Khalifa
David Wadden
Emma Strubell
Honglak Lee
Lu Wang
Iz Beltagy
Hao Peng
HILM
34
14
0
01 Apr 2024
Recommendation of data-free class-incremental learning algorithms by simulating future data
Eva Feillet
Adrian Daniel Popescu
Céline Hudelot
35
0
0
26 Mar 2024
ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models
Jio Oh
Soyeon Kim
Junseok Seo
Jindong Wang
Ruochen Xu
Xing Xie
Steven Euijong Whang
36
1
0
08 Mar 2024
Eternal Sunshine of the Mechanical Mind: The Irreconcilability of Machine Learning and the Right to be Forgotten
Meem Arafat Manab
MU
21
1
0
06 Mar 2024
AutoSAT: Automatically Optimize SAT Solvers via Large Language Models
Yiwen Sun
Xianyin Zhang
Shiyu Huang
Shaowei Cai
Bing-Zhen Zhang
Ke Wei
21
2
0
16 Feb 2024
Enhancing Large Language Models with Pseudo- and Multisource- Knowledge Graphs for Open-ended Question Answering
Jiaxiang Liu
Tong Zhou
Yubo Chen
Kang Liu
Jun Zhao
KELM
22
3
0
15 Feb 2024
Financial Report Chunking for Effective Retrieval Augmented Generation
Antonio Jimeno-Yepes
Yao You
Jan Milczek
Sebastian Laverde
Renyu Li
34
20
0
05 Feb 2024
IllusionX: An LLM-powered mixed reality personal companion
Ramez Yousri
Zeyad Essam
Yehia Kareem
Youstina Sherief
Sherry Gamil
Soha Safwat
18
3
0
04 Feb 2024
A Survey on Large Language Model Hallucination via a Creativity Perspective
Xuhui Jiang
Yuxing Tian
Fengrui Hua
Chengjin Xu
Yuanzhuo Wang
Jian Guo
LRM
19
22
0
02 Feb 2024
Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity
Claudio Novelli
F. Casolari
Philipp Hacker
Giorgio Spedicato
Luciano Floridi
AILaw
SILM
42
41
0
14 Jan 2024
AI Hallucinations: A Misnomer Worth Clarifying
Negar Maleki
Balaji Padmanabhan
Kaushik Dutta
28
33
0
09 Jan 2024
Measurement in the Age of LLMs: An Application to Ideological Scaling
Sean O'Hagan
Aaron Schein
40
8
0
14 Dec 2023
HALO: An Ontology for Representing and Categorizing Hallucinations in Large Language Models
Navapat Nananukul
M. Kejriwal
HILM
24
3
0
08 Dec 2023
Towards Knowledge-driven Autonomous Driving
Xin Li
Yeqi Bai
Pinlong Cai
Licheng Wen
Daocheng Fu
...
Yikang Li
Botian Shi
Yong-Jin Liu
Liang He
Yu Qiao
32
26
0
07 Dec 2023
Beyond Isolation: Multi-Agent Synergy for Improving Knowledge Graph Construction
Hongbin Ye
Honghao Gui
Aijia Zhang
Tong Liu
Wei Hua
Weiqiang Jia
LLMAG
15
5
0
05 Dec 2023
Interactive Multi-fidelity Learning for Cost-effective Adaptation of Language Model with Sparse Human Supervision
Jiaxin Zhang
Zhuohang Li
Kamalika Das
Kumar Sricharan
23
2
0
31 Oct 2023
FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
Xiang Chen
Duanzheng Song
Honghao Gui
Chengxi Wang
Ningyu Zhang
Jiang Yong
Fei Huang
Chengfei Lv
Dan Zhang
Huajun Chen
HILM
32
14
0
18 Oct 2023
Advancing Perception in Artificial Intelligence through Principles of Cognitive Science
Palaash Agrawal
Cheston Tan
Heena Rathore
39
1
0
13 Oct 2023
Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity
Cunxiang Wang
Xiaoze Liu
Yuanhao Yue
Xiangru Tang
Tianhang Zhang
...
Linyi Yang
Jindong Wang
Xing Xie
Zheng-Wei Zhang
Yue Zhang
HILM
KELM
51
170
0
11 Oct 2023
The Confidence-Competence Gap in Large Language Models: A Cognitive Study
Aniket Kumar Singh
Suman Devkota
Bishal Lamichhane
Uttam Dhakal
Chandra Dhakal
LRM
18
9
0
28 Sep 2023
Can LLM-Generated Misinformation Be Detected?
Canyu Chen
Kai Shu
DeLMO
27
157
0
25 Sep 2023
PACE-LM: Prompting and Augmentation for Calibrated Confidence Estimation with GPT-4 in Cloud Incident Root Cause Analysis
Dylan Zhang
Xuchao Zhang
Chetan Bansal
P. Las-Casas
Rodrigo Fonseca
Saravan Rajmohan
38
1
0
11 Sep 2023
Detecting and Mitigating Hallucinations in Multilingual Summarisation
Yifu Qiu
Yftah Ziser
Anna Korhonen
E. Ponti
Shay B. Cohen
HILM
49
42
0
23 May 2023
How Language Model Hallucinations Can Snowball
Muru Zhang
Ofir Press
William Merrill
Alisa Liu
Noah A. Smith
HILM
LRM
78
246
0
22 May 2023