Large Language Models for Multilingual Previously Fact-Checked Claim Detection

4 March 2025
Ivan Vykopal
Matúš Pikuliak
Simon Ostermann
Tatiana Anikina
Michal Gregor
Marián Šimko
Abstract

In our era of widespread false information, human fact-checkers often face the challenge of duplicating efforts when verifying claims that may have already been addressed in other countries or languages. As false information transcends linguistic boundaries, the ability to automatically detect previously fact-checked claims across languages has become an increasingly important task. This paper presents the first comprehensive evaluation of large language models (LLMs) for multilingual previously fact-checked claim detection. We assess seven LLMs across 20 languages in both monolingual and cross-lingual settings. Our results show that while LLMs perform well for high-resource languages, they struggle with low-resource languages. Moreover, translating original texts into English proved to be beneficial for low-resource languages. These findings highlight the potential of LLMs for multilingual previously fact-checked claim detection and provide a foundation for further research on this promising application of LLMs.
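The paper evaluates prompted LLMs for this task; the exact prompts, models, and datasets are given in the full text. As a purely illustrative sketch (not the authors' code), the following Python snippet shows one way the task framing can be read: an LLM is asked whether an existing fact-check already covers an incoming claim, with an optional translate-to-English step reflecting the reported benefit for low-resource languages. The prompt wording, the query_llm stub, and the helper names are assumptions.

from typing import Callable, List, Optional

def build_prompt(input_claim: str, fact_checked_claim: str) -> str:
    # Illustrative prompt wording; the paper's actual prompts may differ.
    return (
        "You are assisting human fact-checkers.\n"
        f"Input claim: {input_claim}\n"
        f"Previously fact-checked claim: {fact_checked_claim}\n"
        "Does the previously fact-checked claim address the input claim? Answer Yes or No."
    )

def detect_previously_fact_checked(
    input_claim: str,
    fact_check_db: List[str],
    query_llm: Callable[[str], str],
    translate_to_english: Optional[Callable[[str], str]] = None,
) -> List[str]:
    # Optionally translate the input into English first, reflecting the
    # finding that translation helps for low-resource languages.
    claim = translate_to_english(input_claim) if translate_to_english else input_claim
    matches = []
    for candidate in fact_check_db:
        answer = query_llm(build_prompt(claim, candidate))
        if answer.strip().lower().startswith("yes"):
            matches.append(candidate)
    return matches

if __name__ == "__main__":
    # Toy stand-in for a real LLM call, kept local so the sketch runs as-is.
    def dummy_llm(prompt: str) -> str:
        return "Yes" if prompt.count("5G") >= 2 else "No"

    fact_checks = ["5G towers do not spread viruses.", "The Earth is not flat."]
    print(detect_previously_fact_checked("Can 5G signals make people sick?", fact_checks, dummy_llm))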

@article{vykopal2025_2503.02737,
  title={Large Language Models for Multilingual Previously Fact-Checked Claim Detection},
  author={Ivan Vykopal and Matúš Pikuliak and Simon Ostermann and Tatiana Anikina and Michal Gregor and Marián Šimko},
  journal={arXiv preprint arXiv:2503.02737},
  year={2025}
}