arXiv:2406.03075
Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework
5 June 2024
Authors: Xiaoxi Sun, Jinpeng Li, Yan Zhong, Dongyan Zhao, Rui Yan
Tags: LLMAG, HILM
Papers citing "Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework" (4 papers):
MAD-Fact: A Multi-Agent Debate Framework for Long-Form Factuality Evaluation in LLMs
27 Oct 2025 · Yucheng Ning, Xixun Lin, Fang Fang, Yanan Cao
Tags: HILM
LLM-based Agents Suffer from Hallucinations: A Survey of Taxonomy, Methods, and Directions
23 Sep 2025 · Xixun Lin, Yucheng Ning, Jingwen Zhang, Yan Dong, Y. Liu, ..., Bin Wang, Yanan Cao, Kai-xiang Chen, Songlin Hu, Li Guo
Tags: LLMAG, LRM
MAAD: Automate Software Architecture Design through Knowledge-Driven Multi-Agent Collaboration
28 Jul 2025 · Ruiyin Li, Yiran Zhang, Xiyu Zhou, Peng Liang, Weisong Sun, Jifeng Xuan, Zhi Jin, Yang Liu
FIRE: Fact-checking with Iterative Retrieval and Verification
North American Chapter of the Association for Computational Linguistics (NAACL), 2024
17 Oct 2024 · Zhuohan Xie, Daniil Vasilev, Yuxia Wang, Fauzan Farooqui, Hasan Iqbal, Dhruv Sahnan, Iryna Gurevych, Preslav Nakov
Tags: HILM