arXiv: 2311.01386
Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics
Conference on Computational Natural Language Learning (CoNLL), 2023
2 November 2023
Yuhan Zhang, Edward Gibson, Forrest Davis
Papers citing
"Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics"
Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics
J. Michaelov, Catherine Arnett, Benjamin Bergen
30 Apr 2024
Simple Linguistic Inferences of Large Language Models (LLMs): Blind Spots and Blinds
Victoria Basmov, Yoav Goldberg, Reut Tsarfaty
24 May 2023