Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?
23 December 2022
Byung-Doh Oh
William Schuler
arXiv: 2212.12131
Papers citing "Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times?"
1 of 51 citing papers shown:
Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention
S. Ryu
Richard L. Lewis
26 Apr 2021