arXiv:2304.11389
Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens
Byung-Doh Oh, William Schuler
22 April 2023
Papers citing "Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens" (6 papers):

Model Connectomes: A Generational Approach to Data-Efficient Language Models
Klemen Kotar, Greta Tuckute
29 Apr 2025

On the Role of Context in Reading Time Prediction
Andreas Opedal, Eleanor Chodroff, Ryan Cotterell, Ethan Gotlieb Wilcox
12 Sep 2024

Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence
Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy Lapastora, Peter Shen, Lexie Wang, Clevis Willrich, Shane Steinert-Threlkeld
24 May 2024

Temperature-scaling surprisal estimates improve fit to human reading times -- but does it do so for the "right reasons"?
Tong Liu, Iza Škrjanec, Vera Demberg
15 Nov 2023

Context Limitations Make Neural Language Models More Human-Like
Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui
23 May 2022

The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
31 Dec 2020