
Detecting Hallucinated Content in Conditional Neural Sequence Generation (arXiv:2011.02593)
5 November 2020
Chunting Zhou
Graham Neubig
Jiatao Gu
Mona T. Diab
P. Guzmán
Luke Zettlemoyer
Marjan Ghazvininejad
    HILM

Papers citing "Detecting Hallucinated Content in Conditional Neural Sequence Generation"

50 / 137 papers shown
Benchmarking LLM Faithfulness in RAG with Evolving Leaderboards
Manveer Singh Tamber
F. S. Bao
Chenyu Xu
Ge Luo
Suleman Kazi
Minseok Bae
Miaoran Li
Ofer Mendelevitch
Renyi Qu
Jimmy J. Lin
VLM
07 May 2025
UCSC at SemEval-2025 Task 3: Context, Models and Prompt Optimization for Automated Hallucination Detection in LLM Output
Sicong Huang
Jincheng He
Shiyuan Huang
Karthik Raja Anandan
Arkajyoti Chakraborty
Ian Lane
HILM
LRM
05 May 2025
DualRAG: A Dual-Process Approach to Integrate Reasoning and Retrieval for Multi-Hop Question Answering
Rong Cheng
J. Liu
Yan Zheng
Fei Ni
Jiazhen Du
Hangyu Mao
Fuzheng Zhang
Bo-Lan Wang
Jianye Hao
LRM
25 Apr 2025
SemEval-2025 Task 3: Mu-SHROOM, the Multilingual Shared Task on Hallucinations and Related Observable Overgeneration Mistakes
Raúl Vázquez
Timothee Mickus
Elaine Zosa
Teemu Vahtola
Jörg Tiedemann
...
Liane Guillou
Ona de Gibert
Jaione Bengoetxea
Joseph Attieh
Marianna Apidianaki
HILM
VLM
LRM
16 Apr 2025
Graph of AI Ideas: Leveraging Knowledge Graphs and LLMs for AI Research Idea Generation
Xian Gao
Zongyun Zhang
Mingye Xie
Ting Liu
Yuzhuo Fu
11 Mar 2025
FilterRAG: Zero-Shot Informed Retrieval-Augmented Generation to Mitigate Hallucinations in VQA
S M Sarwar
25 Feb 2025
How Much Do LLMs Hallucinate across Languages? On Multilingual Estimation of LLM Hallucination in the Wild
Saad Obaid ul Islam
Anne Lauscher
Goran Glavas
HILM
LRM
21 Feb 2025
Can Hallucination Correction Improve Video-Language Alignment?
Lingjun Zhao
Mingyang Xie
Paola Cascante-Bonilla
Hal Daumé III
Kwonjoon Lee
HILM
VLM
20 Feb 2025
LLM-assisted Physical Invariant Extraction for Cyber-Physical Systems Anomaly Detection
Danial Abshari
Chenglong Fu
Meera Sridhar
17 Nov 2024
AssistRAG: Boosting the Potential of Large Language Models with an Intelligent Information Assistant
Yujia Zhou
Zheng Liu
Zhicheng Dou
AIFin
LRM
RALM
11 Nov 2024
Prompt-Guided Internal States for Hallucination Detection of Large Language Models
Fujie Zhang
Peiqi Yu
Biao Yi
Baolei Zhang
Tong Li
Zheli Liu
HILM
LRM
07 Nov 2024
From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning
Zhirui Deng
Zhicheng Dou
Y. X. Zhu
Ji-Rong Wen
Ruibin Xiong
Mang Wang
Weipeng Chen
06 Nov 2024
Improving Uncertainty Quantification in Large Language Models via Semantic Embeddings
Yashvir S. Grewal
Edwin V. Bonilla
Thang D. Bui
UQCV
30 Oct 2024
Do Robot Snakes Dream like Electric Sheep? Investigating the Effects of Architectural Inductive Biases on Hallucination
Jerry Huang
Prasanna Parthasarathi
Mehdi Rezagholizadeh
Boxing Chen
Sarath Chandar
22 Oct 2024
Evaluating Self-Generated Documents for Enhancing Retrieval-Augmented Generation with Large Language Models
Jiatao Li
Xinyu Hu
Xunjian Yin
Xiaojun Wan
RALM
17 Oct 2024
FactCheckmate: Preemptively Detecting and Mitigating Hallucinations in LMs
Deema Alnuhait
Neeraja Kirtane
Muhammad Khalifa
Hao Peng
HILM
LRM
03 Oct 2024
SMART-RAG: Selection using Determinantal Matrices for Augmented Retrieval
Jiatao Li
Xinyu Hu
Xiaojun Wan
21 Sep 2024
Evaluating the Translation Performance of Large Language Models Based on Euas-20
Yan Huang
Wei Liu
ELM
06 Aug 2024
Mitigating Entity-Level Hallucination in Large Language Models
Weihang Su
Yichen Tang
Qingyao Ai
Changyue Wang
Zhijing Wu
Yiqun Liu
HILM
12 Jul 2024
From Loops to Oops: Fallback Behaviors of Language Models Under Uncertainty
Maor Ivgi
Ori Yoran
Jonathan Berant
Mor Geva
HILM
08 Jul 2024
Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models
Yuyan Chen
Qiang Fu
Yichen Yuan
Zhihao Wen
Ge Fan
Dayiheng Liu
Dongmei Zhang
Zhixu Li
Yanghua Xiao
HILM
04 Jul 2024
Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness
Khyathi Raghavi Chandu
Linjie Li
Anas Awadalla
Ximing Lu
Jae Sung Park
Jack Hessel
Lijuan Wang
Yejin Choi
02 Jul 2024
VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models
Yuxuan Wang
Yueqian Wang
Dongyan Zhao
Cihang Xie
Zilong Zheng
MLLM
VLM
24 Jun 2024
REAL Sampling: Boosting Factuality and Diversity of Open-Ended Generation via Asymptotic Entropy
Haw-Shiuan Chang
Nanyun Peng
Mohit Bansal
Anil Ramakrishna
Tagyoung Chung
HILM
11 Jun 2024
The Task-oriented Queries Benchmark (ToQB)
Keun Soo Yim
05 Jun 2024
Confidence-Aware Sub-Structure Beam Search (CABS): Mitigating Hallucination in Structured Data Generation with Large Language Models
Chengwei Wei
Kee Kiat Koo
Amir Tavanaei
Karim Bouyarmane
30 May 2024
Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding
Shenghuan Sun
Gregory M. Goldgof
Alexander Schubert
Zhiqing Sun
Thomas Hartvigsen
A. Butte
Ahmed Alaa
LM&MA
29 May 2024
Text Generation: A Systematic Literature Review of Tasks, Evaluation, and Challenges
Jonas Becker
Jan Philip Wahle
Bela Gipp
Terry Ruas
24 May 2024
Alleviating Hallucinations in Large Vision-Language Models through Hallucination-Induced Optimization
Beitao Chen
Xinyu Lyu
Lianli Gao
Jingkuan Song
Hengtao Shen
MLLM
24 May 2024
Medical Dialogue: A Survey of Categories, Methods, Evaluation and Challenges
Xiaoming Shi
Zeming Liu
Li Du
Yuxuan Wang
Hongru Wang
Yuhang Guo
Tong Ruan
Jie Xu
Shaoting Zhang
LM&MA
ELM
17 May 2024
Fake Artificial Intelligence Generated Contents (FAIGC): A Survey of Theories, Detection Methods, and Opportunities
Xiaomin Yu
Yezhaohui Wang
Yanfang Chen
Zhen Tao
Dinghao Xi
Shichao Song
Simin Niu
Zhiyu Li
25 Apr 2024
Is Table Retrieval a Solved Problem? Exploring Join-Aware Multi-Table Retrieval
Peter Baile Chen
Yi Zhang
Dan Roth
LMTD
15 Apr 2024
Mitigating Hallucination in Abstractive Summarization with Domain-Conditional Mutual Information
Kyubyung Chae
Jaepill Choi
Yohan Jo
Taesup Kim
HILM
15 Apr 2024
Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward
Xuan Xie
Jiayang Song
Zhehua Zhou
Yuheng Huang
Da Song
Lei Ma
OffRL
12 Apr 2024
Know When To Stop: A Study of Semantic Drift in Text Generation
Ava Spataru
Eric Hambro
Elena Voita
Nicola Cancedda
08 Apr 2024
A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation
Jifan Yu
Xiaohan Zhang
Yifan Xu
Xuanyu Lei
Zijun Yao
Jing Zhang
Lei Hou
Juanzi Li
HILM
04 Apr 2024
VURF: A General-purpose Reasoning and Self-refinement Framework for Video Understanding
Ahmad A Mahmood
Ashmal Vayani
Muzammal Naseer
Salman Khan
Fahad Shahbaz Khan
LRM
21 Mar 2024
DRAGIN: Dynamic Retrieval Augmented Generation based on the Information Needs of Large Language Models
Weihang Su
Yichen Tang
Qingyao Ai
Zhijing Wu
Yiqun Liu
3DV
RALM
AI4TS
SyDa
15 Mar 2024
SemEval-2024 Shared Task 6: SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes
Timothee Mickus
Elaine Zosa
Raúl Vázquez
Teemu Vahtola
Jörg Tiedemann
Vincent Segonne
Alessandro Raganato
Marianna Apidianaki
HILM
LRM
12 Mar 2024
Unsupervised Real-Time Hallucination Detection based on the Internal States of Large Language Models
Weihang Su
Changyue Wang
Qingyao Ai
Hu Yiran
Zhijing Wu
Yujia Zhou
Yiqun Liu
HILM
11 Mar 2024
On the Benefits of Fine-Grained Loss Truncation: A Case Study on Factuality in Summarization
Lorenzo Jaime Yu Flores
Arman Cohan
HILM
09 Mar 2024
HaluEval-Wild: Evaluating Hallucinations of Language Models in the Wild
Zhiying Zhu
Yiming Yang
Zhiqing Sun
HILM
VLM
07 Mar 2024
Successfully Guiding Humans with Imperfect Instructions by Highlighting Potential Errors and Suggesting Corrections
Lingjun Zhao
Khanh Nguyen
Hal Daumé
26 Feb 2024
A Data-Centric Approach To Generate Faithful and High Quality Patient Summaries with Large Language Models
S. Hegselmann
Zejiang Shen
Florian Gierse
Monica Agrawal
David Sontag
Xiaoyi Jiang
HILM
VLM
23 Feb 2024
Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs
Jiejun Tan
Zhicheng Dou
Yutao Zhu
Peidong Guo
Kun Fang
Ji-Rong Wen
19 Feb 2024
Metacognitive Retrieval-Augmented Large Language Models
Yujia Zhou
Zheng Liu
Jiajie Jin
Jian-yun Nie
Zhicheng Dou
RALM
KELM
AIFin
LRM
18 Feb 2024
Redefining "Hallucination" in LLMs: Towards a psychology-informed framework for mitigating misinformation
Elijah Berberette
Jack Hutchins
Amir Sadovnik
01 Feb 2024
Hallucination Detection and Hallucination Mitigation: An Investigation
Junliang Luo
Tianyu Li
Di Wu
Michael R. M. Jenkin
Steve Liu
Gregory Dudek
HILM
LLMAG
16 Jan 2024
AI Hallucinations: A Misnomer Worth Clarifying
Negar Maleki
Balaji Padmanabhan
Kaushik Dutta
09 Jan 2024
Don't Believe Everything You Read: Enhancing Summarization Interpretability through Automatic Identification of Hallucinations in Large Language Models
Priyesh Vakharia
Devavrat Joshi
Meenal Chavan
Dhananjay Sonawane
Bhrigu Garg
Parsa Mazaheri
HILM
22 Dec 2023