Hallucination is Inevitable: An Innate Limitation of Large Language Models
arXiv 2401.11817 · 22 January 2024
Ziwei Xu, Sanjay Jain, Mohan S. Kankanhalli
Tags: HILM, LRM
Links: ArXiv · PDF · HTML

Papers citing "Hallucination is Inevitable: An Innate Limitation of Large Language Models"
16 / 116 papers shown
EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models
Shangyu Xing, Fei Zhao, Zhen Wu, Tuo An, Weihao Chen, Chunhui Li, Jianbing Zhang, Xinyu Dai
Tags: MLLM, MU · 39 / 5 / 0 · 15 Feb 2024
Building Guardrails for Large Language Models
Yizhen Dong, Ronghui Mu, Gao Jin, Yi Qi, Jinwei Hu, Xingyu Zhao, Jie Meng, Wenjie Ruan, Xiaowei Huang
Tags: OffRL · 57 / 23 / 0 · 02 Feb 2024
LLaMP: Large Language Model Made Powerful for High-fidelity Materials Knowledge Retrieval and Distillation
Chiang Yuan, Elvis Hsieh, Chia-Hong Chou, Janosh Riebesell
14 / 9 / 0 · 30 Jan 2024
Hallucination Detection and Hallucination Mitigation: An Investigation
Junliang Luo, Tianyu Li, Di Wu, Michael R. M. Jenkin, Steve Liu, Gregory Dudek
Tags: HILM, LLMAG · 26 / 20 / 0 · 16 Jan 2024
Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models
Matthew Dahl, Varun Magesh, Mirac Suzgun, Daniel E. Ho
Tags: HILM, AILaw · 17 / 71 / 0 · 02 Jan 2024
Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges
Qingyao Li, Lingyue Fu, Weiming Zhang, Xianyu Chen, Jingwei Yu, Wei Xia, Weinan Zhang, Ruiming Tang, Yong Yu
Tags: AI4Ed, ELM · 20 / 16 / 0 · 27 Dec 2023
SelfEval: Leveraging the discriminative nature of generative models for evaluation
Sai Saketh Rambhatla, Ishan Misra
Tags: EGVM · 17 / 4 / 0 · 17 Nov 2023
Automated Annotation of Scientific Texts for ML-based Keyphrase Extraction and Validation
O. Amusat, Harshad B. Hegde, Christopher J. Mungall, Anna Giannakou, Neil Byers, Dan Gunter, Kjiersten Fagnan, Lavanya Ramakrishnan
11 / 2 / 0 · 08 Nov 2023
SCOTT: Self-Consistent Chain-of-Thought Distillation
Jamie Yap, Zhengyang Wang, Zheng Li, K. Lynch, Bing Yin, Xiang Ren
Tags: LRM · 57 / 91 / 0 · 03 May 2023
Do large language models resemble humans in language use?
Zhenguang G. Cai, Xufeng Duan, David A. Haslett, Shuqi Wang, M. Pickering
Tags: ALM · 67 / 37 / 0 · 10 Mar 2023
A Survey on Uncertainty Quantification Methods for Deep Learning
Wenchong He, Zhe Jiang, Tingsong Xiao, Zelin Xu, Yukun Li
Tags: BDL, UQCV, AI4CE · 6 / 16 / 0 · 26 Feb 2023
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto, Tiago B. Gonçalves, João Ribeiro Pinto, W. Silva, Ana F. Sequeira, Arun Ross, Jaime S. Cardoso
Tags: XAI · 15 / 12 / 0 · 19 Aug 2022
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
Tags: LM&Ro, LRM, AI4CE, ReLM · 315 / 8,261 / 0 · 28 Jan 2022
Deduplicating Training Data Makes Language Models Better
Katherine Lee, Daphne Ippolito, A. Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini
Tags: SyDa · 234 / 447 / 0 · 14 Jul 2021
The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey
Yi-Chong Huang, Xiachong Feng, Xiaocheng Feng, Bing Qin
Tags: HILM · 115 / 90 / 0 · 30 Apr 2021
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
Tags: AIMat · 236 / 1,508 / 0 · 31 Dec 2020