They want to pretend not to understand: The Limits of Current LLMs in Interpreting Implicit Content of Political Discourse

7 June 2025
Walter Paci
Alessandro Panunzi
Sandro Pezzelle
Main: 8 pages · 6 figures · 5 tables · Bibliography: 3 pages · Appendix: 14 pages
Abstract

Implicit content plays a crucial role in political discourse, where speakers systematically employ pragmatic strategies such as implicatures and presuppositions to influence their audiences. Large Language Models (LLMs) have demonstrated strong performance in tasks requiring complex semantic and pragmatic understanding, highlighting their potential for detecting and explaining the meaning of implicit content. However, their ability to do this within political discourse remains largely underexplored. Leveraging, for the first time, the large IMPAQTS corpus, which comprises Italian political speeches with the annotation of manipulative implicit content, we propose methods to test the effectiveness of LLMs in this challenging problem. Through a multiple-choice task and an open-ended generation task, we demonstrate that all tested models struggle to interpret presuppositions and implicatures. We conclude that current LLMs lack the key pragmatic capabilities necessary for accurately interpreting highly implicit language, such as that found in political discourse. At the same time, we highlight promising trends and future directions for enhancing model performance. We release our data and code at this https URL
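The abstract's multiple-choice task can be pictured as presenting a speech excerpt together with candidate paraphrases of its implicit content and checking whether the model selects the correct one. Below is a minimal sketch of such an evaluation loop; the item structure, the example excerpt, and the `query_model` placeholder are illustrative assumptions, not the authors' released code or the IMPAQTS annotation scheme.

```python
# Hypothetical sketch of a multiple-choice evaluation of implicit-content
# interpretation. The LLM call is stubbed out with a random guess; in a real
# setup, query_model would wrap an actual model API.
import random
from dataclasses import dataclass

@dataclass
class MCItem:
    excerpt: str          # annotated passage from a political speech
    options: list[str]    # candidate statements of the implicit content
    gold_index: int       # index of the correct interpretation

def query_model(prompt: str, n_options: int) -> int:
    """Placeholder for an LLM call; here it just guesses at random."""
    return random.randrange(n_options)

def evaluate(items: list[MCItem]) -> float:
    correct = 0
    for item in items:
        prompt = (
            f"Excerpt: {item.excerpt}\n"
            "Which option states the implicit content conveyed by the speaker?\n"
            + "\n".join(f"{i}. {opt}" for i, opt in enumerate(item.options))
        )
        if query_model(prompt, len(item.options)) == item.gold_index:
            correct += 1
    return correct / len(items)

if __name__ == "__main__":
    # Invented example: a presupposition trigger ("restore") implying a prior state.
    demo = [MCItem(
        excerpt="We will finally restore order in this country.",
        options=[
            "The speaker presupposes that order is currently lacking.",
            "The speaker asserts that order already exists.",
            "The speaker makes no claim about the current state of order.",
        ],
        gold_index=0,
    )]
    print(f"Accuracy: {evaluate(demo):.2f}")
```

Accuracy over such items would give the multiple-choice score; the open-ended variant would instead ask the model to generate the implicit content and compare it against the annotation.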

@article{paci2025_2506.06775,
  title={They want to pretend not to understand: The Limits of Current LLMs in Interpreting Implicit Content of Political Discourse},
  author={Walter Paci and Alessandro Panunzi and Sandro Pezzelle},
  journal={arXiv preprint arXiv:2506.06775},
  year={2025}
}