HALO: An Ontology for Representing and Categorizing Hallucinations in Large Language Models
arXiv: 2312.05209
8 December 2023
Navapat Nananukul, M. Kejriwal
Tags: HILM
Papers citing "HALO: An Ontology for Representing and Categorizing Hallucinations in Large Language Models" (8 of 8 papers shown)
PhD: A ChatGPT-Prompted Visual hallucination Evaluation Dataset
Jiazhen Liu, Yuhan Fu, Ruobing Xie, Runquan Xie, Xingwu Sun, Fengzong Lian, Zhanhui Kang, Xirong Li
Tags: MLLM · 17 Mar 2024

Generative Models and Connected and Automated Vehicles: A Survey in Exploring the Intersection of Transportation and AI
Dong Shu, Zhouyao Zhu
14 Mar 2024

Measuring and Reducing LLM Hallucination without Gold-Standard Answers
Jiaheng Wei, Yuanshun Yao, Jean-François Ton, Hongyi Guo, Andrew Estornell, Yang Liu
Tags: HILM · 16 Feb 2024

A Knowledge Engineering Primer
Agnieszka Lawrynowicz
26 May 2023

How Language Model Hallucinations Can Snowball
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith
Tags: HILM, LRM · 22 May 2023

Sparks of Artificial General Intelligence: Early experiments with GPT-4
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, J. Gehrke, Eric Horvitz, ..., Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang
Tags: ELM, AI4MH, AI4CE, ALM · 22 Mar 2023

SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Potsawee Manakul, Adian Liusie, Mark J. F. Gales
Tags: HILM, LRM · 15 Mar 2023

Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
Tags: VLM · 24 Feb 2021