KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking

3 April 2024
Jiawei Zhang, Chejian Xu, Y. Gai, Freddy Lecue, Dawn Song, Yue Liu
HILM

Papers citing "KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking"

11 papers shown
JointCQ: Improving Factual Hallucination Detection with Joint Claim and Query Generation
F. Xu, Huixuan Zhang, Zhenliang Zhang, Jiahao Wang, Xiaojun Wan
HILM
22 Oct 2025

Large Language Models Hallucination: A Comprehensive Survey
Aisha Alansari, Hamzah Luqman
HILM, LRM
05 Oct 2025

Intent-Driven Storage Systems: From Low-Level Tuning to High-Level Understanding
Shai Bergman, Won Wook Song, Lukas Cavigelli, Konstantin Berestizshevsky, Ke Zhou, Ji Zhang
29 Sep 2025

Beyond Facts: Evaluating Intent Hallucination in Large Language Models
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
Yijie Hao, Haofei Yu, Jiaxuan You
HILM, LRM
06 Jun 2025

UDora: A Unified Red Teaming Framework against LLM Agents by Dynamically Hijacking Their Own Reasoning
J.N. Zhang, Shuang Yang, B. Li
AAML, LLMAG
28 Feb 2025

SafeAuto: Knowledge-Enhanced Safe Autonomous Driving with Multimodal Foundation Models
J.N. Zhang, Xuan Yang, Tianfu Wang, Yu Yao, Aleksandr Petiushko, B. Li
28 Feb 2025

Decoding Knowledge in Large Language Models: A Framework for Categorization and Comprehension
Yanbo Fang, Ruixiang Tang
ELM
03 Jan 2025

ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models
Yuzhe Gu, Ziwei Ji, Wenwei Zhang, Chengqi Lyu, Dahua Lin, Kai Chen
HILM
05 Jul 2024

A Probabilistic Framework for LLM Hallucination Detection via Belief Tree Propagation
Bairu Hou, Yang Zhang, Jacob Andreas, Shiyu Chang
11 Jun 2024

CheckEmbed: Effective Verification of LLM Solutions to Open-Ended Tasks
Maciej Besta, Lorenzo Paleari, Marcin Copik, Robert Gerstenberger, Aleš Kubíček, ..., Eric Schreiber, Tomasz Lehmann, H. Niewiadomski, Torsten Hoefler
04 Jun 2024

Detecting Multimedia Generated by Large AI Models: A Survey
Li Lin, Neeraj Gupta, Yue Zhang, Hainan Ren, Chun-Hao Liu, Feng Ding, Xin Eric Wang, Xin Li, Luisa Verdoliva, Shu Hu
22 Jan 2024