ResearchTrend.AI
Negation: A Pink Elephant in the Large Language Models' Room?

28 March 2025
Tereza Vrabcová
Marek Kadlčík
Petr Sojka
Michal Štefánik
Michal Spiegel
Abstract

Negations are key to determining sentence meaning, making them essential for logical reasoning. Despite their importance, negations pose a substantial challenge for large language models (LLMs) and remain underexplored. We construct two multilingual natural language inference (NLI) datasets with paired examples differing in negation. We evaluate popular LLMs to investigate how model size and language affect their ability to handle negation correctly. Contrary to previous work, we show that increasing model size consistently improves the models' ability to handle negations. Furthermore, we find that both the models' reasoning accuracy and their robustness to negation are language-dependent, and that the length and explicitness of the premise have a greater impact on robustness than language. Our datasets can facilitate further research on and improvement of language model reasoning in multilingual settings.

@article{vrabcová2025_2503.22395,
  title={Negation: A Pink Elephant in the Large Language Models' Room?},
  author={Tereza Vrabcová and Marek Kadlčík and Petr Sojka and Michal Štefánik and Michal Spiegel},
  journal={arXiv preprint arXiv:2503.22395},
  year={2025}
}