
Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics

Conference on Computational Natural Language Learning (CoNLL), 2023
2 November 2023
Yuhan Zhang, Edward Gibson, Forrest Davis
arXiv: 2311.01386

Papers citing "Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics"

2 / 2 papers shown
  1. Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics
     J. Michaelov, Catherine Arnett, Benjamin Bergen (30 Apr 2024)
  2. Simple Linguistic Inferences of Large Language Models (LLMs): Blind Spots and Blinds
     Victoria Basmov, Yoav Goldberg, Reut Tsarfaty (24 May 2023)