

Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?

arXiv:2107.09648

20 July 2021
J. Michaelov, Megan D. Bardolph, S. Coulson, Benjamin Bergen

Papers citing "Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?"

11 papers shown
Decomposition of surprisal: Unified computational model of ERP components in language processing
Jiaxuan Li, Richard Futrell
10 Sep 2024
Revenge of the Fallen? Recurrent Models Match Transformers at Predicting Human Language Comprehension Metrics
J. Michaelov, Catherine Arnett, Benjamin Bergen
30 Apr 2024
Psychometric Predictive Power of Large Language Models
Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin
13 Nov 2023
When Language Models Fall in Love: Animacy Processing in Transformer Language Models
Michael Hanna, Yonatan Belinkov, Sandro Pezzelle
23 Oct 2023
Are words equally surprising in audio and audio-visual comprehension?
Pranava Madhyastha, Ye Zhang, G. Vigliocco
14 Jul 2023
Can Peanuts Fall in Love with Distributional Semantics?
J. Michaelov, S. Coulson, Benjamin Bergen
20 Jan 2023
Rarely a problem? Language models exhibit inverse scaling in their predictions following few-type quantifiers
J. Michaelov, Benjamin Bergen
16 Dec 2022
Collateral facilitation in humans and language models
J. Michaelov, Benjamin Bergen
09 Nov 2022
Context Limitations Make Neural Language Models More Human-Like
Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui
23 May 2022
Testing the limits of natural language models for predicting human language judgments
Tal Golan, Matthew Siegelman, N. Kriegeskorte, Christopher A. Baldassano
07 Apr 2022
So Cloze yet so Far: N400 Amplitude is Better Predicted by Distributional Information than Human Predictability Judgements
J. Michaelov, S. Coulson, Benjamin Bergen
02 Sep 2021