arXiv:1811.12112
Non-entailed subsequences as a challenge for natural language inference

29 November 2018
R. Thomas McCoy
Tal Linzen
Abstract

Neural network models have shown great success at natural language inference (NLI), the task of determining whether a premise entails a hypothesis. However, recent studies suggest that these models may rely on fallible heuristics rather than deep language understanding. We introduce a challenge set to test whether NLI systems adopt one such heuristic: assuming that a sentence entails all of its subsequences, such as assuming that "Alice believes Mary is lying" entails "Alice believes Mary." We evaluate several competitive NLI models on this challenge set and find strong evidence that they do rely on the subsequence heuristic.
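The heuristic under test can be made concrete with a short sketch. The function names below are illustrative, not the authors' code; it simply checks whether the hypothesis's tokens appear as a contiguous run inside the premise, which is the shortcut a heuristic-driven NLI model would follow.

```python
# Minimal sketch (not the paper's implementation) of the subsequence
# heuristic: predict "entailment" whenever the hypothesis's tokens occur
# as a contiguous subsequence of the premise's tokens.

def is_subsequence(premise: str, hypothesis: str) -> bool:
    """Return True if the hypothesis tokens occur contiguously in the premise."""
    p, h = premise.split(), hypothesis.split()
    return any(p[i:i + len(h)] == h for i in range(len(p) - len(h) + 1))

def heuristic_label(premise: str, hypothesis: str) -> str:
    # A model relying on the heuristic answers "entailment" here even
    # when the true label is non-entailment.
    return "entailment" if is_subsequence(premise, hypothesis) else "non-entailment"
```

On the paper's example, `heuristic_label("Alice believes Mary is lying", "Alice believes Mary")` returns `"entailment"`, even though the premise does not actually entail the hypothesis; the challenge set is built from exactly such cases.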
