On Fact and Frequency: LLM Responses to Misinformation Expressed with Uncertainty

6 March 2025
Yana van de Sande
Gunes Açar
Thabo van Woudenberg
Martha Larson
Abstract

We study LLM judgments of misinformation expressed with uncertainty. Our experiments examine the responses of three widely used LLMs (GPT-4o, LLaMA3, DeepSeek-v2) to misinformation propositions that have been verified false and then transformed into uncertain statements according to an uncertainty typology. Our results show that after transformation, LLMs change their fact-checking classification from false to not-false in 25% of the cases. Analysis reveals that the change cannot be explained by predictors to which humans are expected to be sensitive, i.e., modality, linguistic cues, or argumentation strategy. The exception is doxastic transformations, which use linguistic cue phrases such as "It is believed ...". To gain further insight, we prompt the LLM to make another judgment about the transformed misinformation statements that is not related to truth value. Specifically, we study LLM estimates of the frequency with which people make the uncertain statement. We find a small but significant correlation between judgment of fact and estimation of frequency.

@article{sande2025_2503.04271,
  title={On Fact and Frequency: LLM Responses to Misinformation Expressed with Uncertainty},
  author={Yana van de Sande and Gunes A{\c{c}}ar and Thabo van Woudenberg and Martha Larson},
  journal={arXiv preprint arXiv:2503.04271},
  year={2025}
}