
OpenFactCheck: A Unified Framework for Factuality Evaluation of LLMs

Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Main: 7 pages · Appendix: 2 pages · Bibliography: 2 pages · 5 figures · 3 tables
Abstract

The increased use of large language models (LLMs) across a variety of real-world applications calls for automatic tools to check the factual accuracy of their outputs, as LLMs often hallucinate. This is difficult because it requires assessing the factuality of free-form, open-domain responses. While there has been a lot of research on this topic, different papers use different evaluation benchmarks and measures, which makes them hard to compare and hampers future progress. To mitigate these issues, we developed OpenFactCheck, a unified framework with three modules: (i) RESPONSEEVAL, which allows users to easily customize an automatic fact-checking system and to assess the factuality of all claims in an input document using that system, (ii) LLMEVAL, which assesses the overall factuality of an LLM, and (iii) CHECKEREVAL, a module to evaluate automatic fact-checking systems. OpenFactCheck is open-sourced (this https URL) and publicly released as a Python library (this https URL) and as a web service (this http URL). A video describing the system is available at this https URL.
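Since the abstract describes a Python library exposing three modules, a minimal usage sketch may help orient readers. Note that the entry-point class, attribute names, method signatures, and arguments below are illustrative assumptions chosen to mirror the module names in the abstract; they are not confirmed parts of the released openfactcheck API.

```python
# Hypothetical sketch of driving the three OpenFactCheck modules named in
# the abstract. All identifiers (OpenFactCheck, response_evaluator,
# llm_evaluator, checker_evaluator, and their arguments) are assumptions
# for illustration, not the library's documented interface.

from openfactcheck import OpenFactCheck  # assumed entry point

ofc = OpenFactCheck()

# (i) RESPONSEEVAL: check the factuality of all claims in one free-form
# LLM response, using the fact-checking pipeline the user has configured.
response_report = ofc.response_evaluator.evaluate(
    response="The Eiffel Tower opened in 1889 and stands in Berlin."
)

# (ii) LLMEVAL: assess the overall factuality of an LLM from its answers
# to a benchmark of fact-seeking questions.
llm_report = ofc.llm_evaluator.evaluate(model_name="my-llm")

# (iii) CHECKEREVAL: score a custom automatic fact-checking system
# against human-annotated gold verdicts.
checker_report = ofc.checker_evaluator.evaluate(checker_name="my-checker")
```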
