
From Text to Graph: Leveraging Graph Neural Networks for Enhanced Explainability in NLP

Abstract

Researchers have largely delegated natural language processing tasks to Transformer-based models, particularly generative ones, because of their versatility across generation and classification tasks. As these models grow in size, they achieve outstanding results, and given their widespread use, many explainability techniques have been developed for them. However, explaining them is computationally expensive precisely because of their size. Moreover, Transformers interpret input through tokens that fragment words into sequences lacking inherent semantic meaning, which complicates explanation from the very beginning. This study proposes a novel methodology for achieving explainability in natural language processing tasks: sentences are automatically converted into graphs whose nodes and relations express fundamental linguistic concepts, thereby preserving semantics. This knowledge can then be exploited in downstream tasks, making it possible to identify trends and understand how the model associates the different elements of the text with the explained task. The experiments delivered promising results in determining the most critical components within the text structure for a given classification.
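The abstract does not specify the exact graph representation, but the core idea of converting a sentence into a graph of words and linguistic relations can be sketched as follows. This is an illustrative example only: the triple list, the relation labels, and the `sentence_to_graph` helper are hypothetical, and a real pipeline would obtain the triples from a dependency parser rather than by hand.

```python
from collections import defaultdict

def sentence_to_graph(triples):
    """Build an adjacency map from (head, relation, dependent) triples.

    Each word becomes a node and each labelled dependency becomes a
    directed edge, so the sentence keeps its linguistic structure
    instead of being flattened into sub-word tokens.
    """
    graph = defaultdict(list)
    for head, relation, dependent in triples:
        graph[head].append((relation, dependent))
        graph.setdefault(dependent, [])  # ensure leaf words appear as nodes
    return dict(graph)

# Hypothetical dependency triples for "The model classifies text".
triples = [
    ("classifies", "nsubj", "model"),
    ("classifies", "obj", "text"),
    ("model", "det", "The"),
]
graph = sentence_to_graph(triples)
```

A graph built this way could then be handed to a graph neural network, whose node- and edge-level attributions map back to whole words and named relations rather than to opaque sub-word tokens.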

@article{yáñez-romero2025_2504.02064,
  title={From Text to Graph: Leveraging Graph Neural Networks for Enhanced Explainability in NLP},
  author={Fabio Yáñez-Romero and Andrés Montoyo and Armando Suárez and Yoan Gutiérrez and Ruslan Mitkov},
  journal={arXiv preprint arXiv:2504.02064},
  year={2025}
}