Fact-checking AI-generated news reports: Can LLMs catch their own lies?

24 March 2025
Jiayi Yao
Haibo Sun
Nianwen Xue
Abstract

In this paper, we evaluate the ability of Large Language Models (LLMs) to assess the veracity of claims in "news reports" generated by themselves or other LLMs. Our goal is to determine whether LLMs can effectively fact-check their own content, using methods similar to those used to verify claims made by humans. Our findings indicate that LLMs are more effective at assessing claims in national or international news stories than in local news stories, better at evaluating static information than dynamic information, and better at verifying true claims compared to false ones. We hypothesize that this disparity arises because the former types of claims are better represented in the training data. Additionally, we find that incorporating retrieved results from a search engine in a Retrieval-Augmented Generation (RAG) setting significantly reduces the number of claims an LLM cannot assess. However, this approach also increases the occurrence of incorrect assessments, partly due to irrelevant or low-quality search results. This diagnostic study highlights the need for future research on fact-checking machine-generated reports to prioritize improving the precision and relevance of retrieved information to better support fact-checking efforts. Furthermore, claims about dynamic events and local news may require human-in-the-loop fact-checking systems to ensure accuracy and reliability.
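
As a rough illustration of the RAG setting described in the abstract, the sketch below asks an LLM to label a single claim given retrieved search snippets. It is a minimal sketch under stated assumptions, not the authors' pipeline: the call_llm and web_search helpers are hypothetical placeholders, and the TRUE/FALSE/CANNOT_ASSESS labels simply mirror the assessment outcomes the abstract discusses.

# Minimal sketch of retrieval-augmented claim verification (assumed setup,
# not the paper's actual implementation). `call_llm(prompt) -> str` and
# `web_search(query, k) -> list[str]` are placeholder callables.
from typing import Callable, List

VERDICTS = {"TRUE", "FALSE", "CANNOT_ASSESS"}

def verify_claim(claim: str,
                 call_llm: Callable[[str], str],
                 web_search: Callable[[str, int], List[str]],
                 k: int = 5) -> str:
    """Judge one claim with an LLM, grounded in retrieved search snippets."""
    snippets = web_search(claim, k)  # retrieval quality drives assessment quality
    evidence = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    prompt = (
        "You are a fact-checker. Using only the evidence below, label the claim "
        "as TRUE, FALSE, or CANNOT_ASSESS.\n\n"
        f"Evidence:\n{evidence}\n\n"
        f"Claim: {claim}\nLabel:"
    )
    answer = call_llm(prompt).strip().upper()
    # Fall back to CANNOT_ASSESS when the model returns an unexpected label.
    return answer if answer in VERDICTS else "CANNOT_ASSESS"

In the non-RAG condition contrasted in the abstract, the web_search step would be dropped and the model would judge the claim from its parametric knowledge alone, which is where the paper reports more claims it cannot assess.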

@article{yao2025_2503.18293,
  title={Fact-checking AI-generated news reports: Can LLMs catch their own lies?},
  author={Jiayi Yao and Haibo Sun and Nianwen Xue},
  journal={arXiv preprint arXiv:2503.18293},
  year={2025}
}