Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification
Haoqiang Kang, Juntong Ni, Huaxiu Yao
arXiv: 2311.09114 · 15 November 2023
HILM · LRM

Papers citing "Ever: Mitigating Hallucination in Large Language Models through Real-Time Verification and Rectification"
32 / 32 papers shown

Safeguarding Mobile GUI Agent via Logic-based Action Verification
Jungjae Lee, Dongjae Lee, Chihun Choi, Youngmin Im, Jaeyoung Wi, Kihong Heo, Sangeun Oh, Sunjae Lee, Insik Shin
LLMAG · 75 / 0 / 0 · 24 Mar 2025

Quantifying the Robustness of Retrieval-Augmented Language Models Against Spurious Features in Grounding Data
Shiping Yang, Jie Wu, Wenbiao Ding, Ning Wu, Shining Liang, Ming Gong, Hengyuan Zhang, Dongmei Zhang
AAML · 64 / 1 / 0 · 07 Mar 2025

Shakespearean Sparks: The Dance of Hallucination and Creativity in LLMs' Decoding Layers
Zicong He, Boxuan Zhang, Lu Cheng
42 / 0 / 0 · 04 Mar 2025

Improving Model Factuality with Fine-grained Critique-based Evaluator
Yiqing Xie, Wenxuan Zhou, Pradyot Prakash, Di Jin, Yuning Mao, ..., Sinong Wang, Han Fang, Carolyn Rose, Daniel Fried, Hejia Zhang
HILM · 25 / 5 / 0 · 24 Oct 2024

Atomic Fact Decomposition Helps Attributed Question Answering
Zhichao Yan, J. Wang, Jiaoyan Chen, Xiaoli Li, Ru Li, Jeff Z. Pan
KELM · HILM · 24 / 0 / 0 · 22 Oct 2024

RAC: Efficient LLM Factuality Correction with Retrieval Augmentation
Changmao Li, Jeffrey Flanigan
KELM · LRM · 14 / 0 / 0 · 21 Oct 2024

A Unified Hallucination Mitigation Framework for Large Vision-Language Models
Yue Chang, Liqiang Jing, Xiaopeng Zhang, Yue Zhang
VLM · MLLM · 50 / 1 / 0 · 24 Sep 2024

SwiftDossier: Tailored Automatic Dossier for Drug Discovery with LLMs and Agents
Gabriele Fossi, Youssef Boulaimen, Leila Outemzabet, Nathalie Jeanray, Stephane Gerart, Sebastien Vachenc, Joanna Giemza, Salvatore Raieli
19 / 2 / 0 · 24 Sep 2024

Zero-resource Hallucination Detection for Text Generation via Graph-based Contextual Knowledge Triples Modeling
Xinyue Fang, Zhen Huang, Zhiliang Tian, Minghui Fang, Ziyi Pan, Quntian Fang, Zhihua Wen, Hengyue Pan, Dongsheng Li
HILM · 80 / 2 / 0 · 17 Sep 2024

CLUE: Concept-Level Uncertainty Estimation for Large Language Models
Yu-Hsiang Wang, Andrew Bai, Che-Ping Tsai, Cho-Jui Hsieh
LRM · 22 / 0 / 0 · 04 Sep 2024

Internal Consistency and Self-Feedback in Large Language Models: A Survey
Xun Liang, Shichao Song, Zifan Zheng, Hanyu Wang, Qingchen Yu, ..., Rong-Hua Li, Peng Cheng, Zhonghao Wang, Feiyu Xiong, Zhiyu Li
HILM · LRM · 54 / 23 / 0 · 19 Jul 2024

On Mitigating Code LLM Hallucinations with API Documentation
Nihal Jain, Robert Kwiatkowski, Baishakhi Ray, M. K. Ramanathan, Varun Kumar
25 / 7 / 0 · 13 Jul 2024

Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness
Khyathi Raghavi Chandu, Linjie Li, Anas Awadalla, Ximing Lu, Jae Sung Park, Jack Hessel, Lijuan Wang, Yejin Choi
30 / 2 / 0 · 02 Jul 2024

Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models
Weihong Zhong, Xiaocheng Feng, Liang Zhao, Qiming Li, Lei Huang, Yuxuan Gu, Weitao Ma, Yuan Xu, Bing Qin
MLLM · 31 / 9 / 0 · 30 Jun 2024

Towards Fine-Grained Citation Evaluation in Generated Text: A Comparative Analysis of Faithfulness Metrics
Weijia Zhang, Mohammad Aliannejadi, Yifei Yuan, Jiahuan Pei, Jia-Hong Huang, Evangelos Kanoulas
HILM · 21 / 12 / 0 · 21 Jun 2024

Investigating and Addressing Hallucinations of LLMs in Tasks Involving Negation
Neeraj Varshney, Satyam Raj, Venkatesh Mishra, Agneet Chatterjee, Ritika Sarkar, Amir Saeidi, Chitta Baral
LRM · 18 / 7 / 0 · 08 Jun 2024

CheckEmbed: Effective Verification of LLM Solutions to Open-Ended Tasks
Maciej Besta, Lorenzo Paleari, Aleš Kubíček, Piotr Nyczyk, Robert Gerstenberger, Patrick Iff, Tomasz Lehmann, H. Niewiadomski, Torsten Hoefler
45 / 5 / 0 · 04 Jun 2024

Tool Learning in the Wild: Empowering Language Models as Automatic Tool Agents
Zhengliang Shi, Shen Gao, Xiuyi Chen, Yue Feng, Lingyong Yan, Haibo Shi, Dawei Yin, Zhumin Chen, Suzan Verberne
LLMAG · 44 / 14 / 0 · 26 May 2024

LLMs can learn self-restraint through iterative self-reflection
Alexandre Piché, Aristides Milios, Dzmitry Bahdanau, Chris Pal
23 / 5 / 0 · 15 May 2024

ClashEval: Quantifying the tug-of-war between an LLM's internal prior and external evidence
Kevin Wu, Eric Wu, James Y. Zou
AAML · 47 / 39 / 0 · 16 Apr 2024

FACTOID: FACtual enTailment fOr hallucInation Detection
Vipula Rawte, S. M. Towhidul, Krishnav Rajbangshi, Shravani Nag, Aman Chadha, Amit P. Sheth, Amitava Das
HILM · 20 / 3 / 0 · 28 Mar 2024

HaluEval-Wild: Evaluating Hallucinations of Language Models in the Wild
Zhiying Zhu, Yiming Yang, Zhiqing Sun
HILM · VLM · 26 / 10 / 0 · 07 Mar 2024

Retrieve Only When It Needs: Adaptive Retrieval Augmentation for Hallucination Mitigation in Large Language Models
Hanxing Ding, Liang Pang, Zihao Wei, Huawei Shen, Xueqi Cheng
HILM · RALM · 67 / 15 / 0 · 16 Feb 2024

Factuality of Large Language Models in the Year 2024
Yuxia Wang, Minghan Wang, Muhammad Arslan Manzoor, Fei Liu, Georgi Georgiev, Rocktim Jyoti Das, Preslav Nakov
LRM · HILM · 22 / 20 / 0 · 04 Feb 2024

Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity
Claudio Novelli, F. Casolari, Philipp Hacker, Giorgio Spedicato, Luciano Floridi
AILaw · SILM · 34 / 41 / 0 · 14 Jan 2024

A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models
S.M. Towhidul Islam Tonmoy, S. M. M. Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, Amitava Das
HILM · 19 / 175 / 0 · 02 Jan 2024

Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, Hannaneh Hajishirzi
RALM · 138 / 600 / 0 · 17 Oct 2023

How Language Model Hallucinations Can Snowball
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith
HILM · LRM · 75 / 246 / 0 · 22 May 2023

"According to ...": Prompting Language Models Improves Quoting from Pre-Training Data
Orion Weller
Marc Marone
Nathaniel Weir
Dawn J Lawrie
Daniel Khashabi
Benjamin Van Durme
HILM
61
44
0
22 May 2023
Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
ReLM · LRM · 291 / 2,712 / 0 · 24 May 2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM · BDL · LRM · AI4CE · 297 / 3,163 / 0 · 21 Mar 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM · 301 / 11,730 / 0 · 04 Mar 2022