Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities

21 October 2022
Suhas Arehalli, Brian Dillon, Tal Linzen

Papers citing "Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities"

Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence
Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy Lapastora, Peter Shen, Lexie Wang, Clevis Willrich, Shane Steinert-Threlkeld
24 May 2024

Information Value: Measuring Utterance Predictability as Distance from Plausible Alternatives
Mario Giulianelli, Sarenne Wallbridge, Raquel Fernández
20 Oct 2023

Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko
16 Jan 2023

Testing the limits of natural language models for predicting human language judgments
Tal Golan, Matthew Siegelman, N. Kriegeskorte, Christopher A. Baldassano
07 Apr 2022