
Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework

Xiaoxi Sun, Jinpeng Li, Yan Zhong, Dongyan Zhao, Rui Yan
5 June 2024
arXiv: 2406.03075
Communities: LLMAG, HILM

Papers citing "Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework"

4 / 4 papers shown
1. MAD-Fact: A Multi-Agent Debate Framework for Long-Form Factuality Evaluation in LLMs
   Yucheng Ning, Xixun Lin, Fang Fang, Yanan Cao
   Communities: HILM
   261 / 0 / 0
   27 Oct 2025

2. LLM-based Agents Suffer from Hallucinations: A Survey of Taxonomy, Methods, and Directions
   Xixun Lin, Yucheng Ning, Jingwen Zhang, Yan Dong, Y. Liu, ..., Bin Wang, Yanan Cao, Kai-xiang Chen, Songlin Hu, Li Guo
   Communities: LLMAG, LRM
   246 / 4 / 0
   23 Sep 2025

3. MAAD: Automate Software Architecture Design through Knowledge-Driven Multi-Agent Collaboration
   Ruiyin Li, Yiran Zhang, Xiyu Zhou, Peng Liang, Weisong Sun, Jifeng Xuan, Zhi Jin, Yang Liu
   103 / 0 / 0
   28 Jul 2025

4. FIRE: Fact-checking with Iterative Retrieval and Verification
   North American Chapter of the Association for Computational Linguistics (NAACL), 2024
   Zhuohan Xie, Daniil Vasilev, Yuxia Wang, Fauzan Farooqui, Hasan Iqbal, Dhruv Sahnan, Iryna Gurevych, Preslav Nakov
   Communities: HILM
   372 / 19 / 0
   17 Oct 2024