
Argumentative Large Language Models for Explainable and Contestable Claim Verification

Abstract

The profusion of knowledge encoded in large language models (LLMs), and their ability to apply this knowledge zero-shot in a range of settings, make them promising candidates for use in decision-making. However, they are currently limited by their inability to provide outputs that can be faithfully explained and effectively contested to correct mistakes. In this paper, we attempt to reconcile these strengths and weaknesses by introducing argumentative LLMs (ArgLLMs), a method for augmenting LLMs with argumentative reasoning. Concretely, ArgLLMs construct argumentation frameworks, which then serve as the basis for formal reasoning in support of decision-making. The interpretable nature of these argumentation frameworks and of the formal reasoning over them means that any decision made by ArgLLMs can be explained and contested. We evaluate ArgLLMs' performance experimentally against state-of-the-art techniques on the decision-making task of claim verification. We also define novel properties to characterise contestability and assess ArgLLMs formally in terms of these properties.
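
The abstract does not spell out how the argumentation frameworks are evaluated. As a rough illustration only, the Python sketch below builds a small quantitative bipolar argumentation framework for a claim and scores it with the DF-QuAD gradual semantics, one common choice in this line of work; the claim text, base scores, and the accept-if-strength-exceeds-0.5 rule are hypothetical stand-ins for LLM-generated arguments and confidence estimates, not the paper's exact pipeline.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    text: str
    base_score: float                     # confidence in [0, 1], e.g. elicited from an LLM
    supporters: List["Argument"] = field(default_factory=list)
    attackers: List["Argument"] = field(default_factory=list)

def aggregate(strengths) -> float:
    # Probabilistic sum 1 - prod(1 - s_i): the DF-QuAD aggregation function
    v = 0.0
    for s in strengths:
        v = v + s - v * s
    return v

def strength(arg: Argument) -> float:
    # DF-QuAD combination: weaken the base score when attack outweighs
    # support, strengthen it when support outweighs attack
    va = aggregate(strength(a) for a in arg.attackers)
    vs = aggregate(strength(s) for s in arg.supporters)
    if va >= vs:
        return arg.base_score - arg.base_score * (va - vs)
    return arg.base_score + (1.0 - arg.base_score) * (vs - va)

# Hypothetical claim with one LLM-generated supporter and one attacker
claim = Argument("The Great Wall of China is visible from space.", base_score=0.5)
claim.supporters.append(Argument("It is an extremely long structure.", base_score=0.4))
claim.attackers.append(Argument("Astronauts report it cannot be seen unaided.", base_score=0.9))

verdict = strength(claim)                 # 0.25 for these scores
print(f"claim strength: {verdict:.2f}")   # accept the claim iff strength > 0.5

Because the framework is explicit, a user could contest the outcome by adding, removing, or re-scoring arguments and re-running strength; this is the kind of interaction the paper's contestability properties appear intended to characterise.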

@article{freedman2025_2405.02079,
  title={Argumentative Large Language Models for Explainable and Contestable Claim Verification},
  author={Gabriel Freedman and Adam Dejl and Deniz Gorur and Xiang Yin and Antonio Rago and Francesca Toni},
  journal={arXiv preprint arXiv:2405.02079},
  year={2025}
}