Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans
19 June 2020
Yair Lakretz, Dieuwke Hupkes, A. Vergallito, Marco Marelli, Marco Baroni, S. Dehaene
arXiv:2006.11098
Papers citing "Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans" (15 papers shown)
Learned feature representations are biased by complexity, learning order, position, and more
Andrew Kyle Lampinen, Stephanie C. Y. Chan, Katherine Hermann
09 May 2024
Language in Vivo vs. in Silico: Size Matters but Larger Language Models Still Do Not Comprehend Language on a Par with Humans
Vittoria Dentella, Fritz Guenther, Evelina Leivada
23 Apr 2024
Causal interventions expose implicit situation models for commonsense language understanding
Takateru Yamakoshi, James L. McClelland, A. Goldberg, Robert D. Hawkins
06 Jun 2023
Language acquisition: do children and language models follow similar learning stages?
Linnea Evanson, Yair Lakretz, J. King
06 Jun 2023
Information-Restricted Neural Language Models Reveal Different Brain Regions' Sensitivity to Semantics, Syntax and Context
Alexandre Pasquiou, Yair Lakretz, B. Thirion, Christophe Pallier
28 Feb 2023
Does Localization Inform Editing? Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models
Peter Hase, Joey Tianyi Zhou, Been Kim, Asma Ghandeharioun
10 Jan 2023
Do LSTMs See Gender? Probing the Ability of LSTMs to Learn Abstract Syntactic Rules
Priyanka Sukumaran, Conor J. Houghton, N. Kazanina
31 Oct 2022
State-of-the-art generalisation research in NLP: A taxonomy and review
Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, ..., Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, Zhijing Jin
06 Oct 2022
Blackbird's language matrices (BLMs): a new benchmark to investigate disentangled generalisation in neural networks
Paola Merlo, A. An, M. A. Rodriguez
22 May 2022
Sparse Interventions in Language Models with Differentiable Masking
Nicola De Cao, Leon Schmid, Dieuwke Hupkes, Ivan Titov
13 Dec 2021
Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans
Yair Lakretz, T. Desbordes, Dieuwke Hupkes, S. Dehaene
14 Oct 2021
Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
Arabella J. Sinclair, Jaap Jumelet, Willem H. Zuidema, Raquel Fernández
30 Sep 2021
On the proper role of linguistically-oriented deep net analysis in linguistic theorizing
Marco Baroni
16 Jun 2021
Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, J. Pineau, Adina Williams, Douwe Kiela
14 Apr 2021
Can RNNs learn Recursive Nested Subject-Verb Agreements?
Yair Lakretz, T. Desbordes, J. King, Benoît Crabbé, Maxime Oquab, S. Dehaene
06 Jan 2021