
AILS-NTUA at SemEval-2025 Task 3: Leveraging Large Language Models and Translation Strategies for Multilingual Hallucination Detection

Abstract

Multilingual hallucination detection remains an underexplored challenge, which the Mu-SHROOM shared task seeks to address. In this work, we propose an efficient, training-free LLM prompting strategy that enhances detection by translating multilingual text spans into English. Our approach achieves competitive rankings across multiple languages, securing first place in two low-resource languages. The consistency of our results highlights the effectiveness of our translation strategy for hallucination detection, demonstrating its applicability regardless of the source language.
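
The sketch below illustrates, under our own assumptions, what a translate-then-detect prompting pipeline of this kind might look like. The `llm` callable, the function names, and the prompt wording are placeholders for illustration, not the authors' actual prompts; in practice the detected English spans would also need to be mapped back to character offsets in the original-language answer, which is omitted here.

```python
# Hedged sketch of a translate-then-detect pipeline (assumptions, not the paper's exact prompts).
# `llm` stands in for any text-completion call that maps a prompt string to a response string.
from typing import Callable


def translate_to_english(llm: Callable[[str], str], text: str, source_lang: str) -> str:
    """Ask the model to translate the answer text into English."""
    prompt = (
        f"Translate the following {source_lang} text into English, "
        f"preserving its meaning as closely as possible:\n\n{text}"
    )
    return llm(prompt)


def detect_hallucinated_spans(
    llm: Callable[[str], str],
    question: str,
    answer: str,
    source_lang: str,
) -> str:
    """Translate the answer to English, then prompt for unsupported spans."""
    english_answer = translate_to_english(llm, answer, source_lang)
    prompt = (
        "You are checking a model-generated answer for hallucinations.\n"
        f"Question: {question}\n"
        f"Answer (translated to English): {english_answer}\n"
        "List the exact spans of the answer that are not supported by the "
        "question or by general world knowledge. Return one span per line."
    )
    return llm(prompt)
```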

@article{karkani2025_2503.02442,
  title={AILS-NTUA at SemEval-2025 Task 3: Leveraging Large Language Models and Translation Strategies for Multilingual Hallucination Detection},
  author={Dimitra Karkani and Maria Lymperaiou and Giorgos Filandrianos and Nikolaos Spanos and Athanasios Voulodimos and Giorgos Stamou},
  journal={arXiv preprint arXiv:2503.02442},
  year={2025}
}