Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics
Yuhan Zhang, Edward Gibson, Forrest Davis
arXiv: 2311.01386 (2 November 2023)
Papers citing "Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics" (4 of 4 papers shown):
- "Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics" by J. Michaelov, Catherine Arnett, Benjamin Bergen (30 Apr 2024)
- "Simple Linguistic Inferences of Large Language Models (LLMs): Blind Spots and Blinds" by Victoria Basmov, Yoav Goldberg, Reut Tsarfaty (24 May 2023)
- "Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities" by Suhas Arehalli, Brian Dillon, Tal Linzen (21 Oct 2022)
- "Frequency Effects on Syntactic Rule Learning in Transformers" by Jason W. Wei, Dan Garrette, Tal Linzen, Ellie Pavlick (14 Sep 2021)