Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework

5 June 2024
Authors: Xiaoxi Sun, Jinpeng Li, Yan Zhong, Dongyan Zhao, Rui Yan
Topics: LLMAG, HILM

Papers citing "Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework"

2 of 2 citing papers shown
MAMM-Refine: A Recipe for Improving Faithfulness in Generation with Multi-Agent Collaboration
David Wan, Justin Chih-Yao Chen, Elias Stengel-Eskin, Mohit Bansal
Topics: LLMAG, LRM
19 Mar 2025
Entity-Based Knowledge Conflicts in Question Answering
Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, Sameer Singh
Topics: HILM
10 Sep 2021