Mapping Trustworthiness in Large Language Models: A Bibliometric Analysis Bridging Theory to Practice

Abstract

The rapid proliferation of Large Language Models (LLMs) has raised significant trustworthiness and ethical concerns. Despite their widespread adoption across domains, there is still no clear consensus on how to define and operationalise trustworthiness. This study aims to bridge the gap between theoretical discussion and practical implementation by analysing research trends, definitions of trustworthiness, and practical techniques. We conducted a bibliometric mapping analysis of 2,006 publications from Web of Science (2019-2025) using the Bibliometrix R package and manually reviewed 68 papers. We found a shift from traditional AI ethics discussions to LLM trustworthiness frameworks. We identified 18 distinct definitions of trust/trustworthiness, with transparency, explainability, and reliability emerging as the most common dimensions, and 20 strategies to enhance LLM trustworthiness, with fine-tuning and retrieval-augmented generation (RAG) being the most prominent. Most of these strategies are developer-driven and applied during the post-training phase. Several authors propose fragmented terminologies rather than unified frameworks, creating a risk of "ethics washing," where ethical discourse is adopted without genuine regulatory commitment. Our findings highlight persistent gaps between theoretical taxonomies and practical implementation and the crucial role of developers in operationalising trust; we call for standardised frameworks and stronger regulatory measures to enable the trustworthy and ethical deployment of LLMs.

@article{cerqueira2025_2503.04785,
  title={Mapping Trustworthiness in Large Language Models: A Bibliometric Analysis Bridging Theory to Practice},
  author={José Siqueira de Cerqueira and Kai-Kristian Kemell and Rebekah Rousi and Nannan Xi and Juho Hamari and Pekka Abrahamsson},
  journal={arXiv preprint arXiv:2503.04785},
  year={2025}
}