Cited By

Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics
J. Michaelov, Catherine Arnett, Benjamin Bergen
arXiv: 2404.19178 · 30 April 2024
Papers citing "Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics" (4 of 4 papers shown):
Frequency Explains the Inverse Correlation of Large Language Models' Size, Training Data Amount, and Surprisal's Fit to Reading Times
Byung-Doh Oh, Shisen Yue, William Schuler
03 Feb 2024

Accounting for Agreement Phenomena in Sentence Comprehension with Transformer Language Models: Effects of Similarity-based Interference on Surprisal and Attention
S. Ryu, Richard L. Lewis
26 Apr 2021

The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
31 Dec 2020

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020