What do RNN Language Models Learn about Filler-Gap Dependencies?
arXiv:1809.00042
31 August 2018
Ethan Gotlieb Wilcox, R. Levy, Takashi Morita, Richard Futrell
Papers citing "What do RNN Language Models Learn about Filler-Gap Dependencies?" (29 papers)
Language Models at the Syntax-Semantics Interface: A Case Study of the Long-Distance Binding of Chinese Reflexive ziji
Xiulin Yang (02 Apr 2025)
Language Models Fail to Introspect About Their Knowledge of Language
Siyuan Song, Jennifer Hu, Kyle Mahowald (10 Mar 2025)
Language Models Largely Exhibit Human-like Constituent Ordering Preferences
Ada Defne Tur, Gaurav Kamath, Siva Reddy (08 Feb 2025)
On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart (27 Jul 2024)
RuBLiMP: Russian Benchmark of Linguistic Minimal Pairs
Ekaterina Taktasheva, Maxim Bazhukov, Kirill Koncha, Alena Fenogenova, Ekaterina Artemova, Vladislav Mikhailov (27 Jun 2024)
Language in Vivo vs. in Silico: Size Matters but Larger Language Models Still Do Not Comprehend Language on a Par with Humans
Vittoria Dentella, Fritz Guenther, Evelina Leivada (23 Apr 2024)
Decoding Probing: Revealing Internal Linguistic Structures in Neural Language Models using Minimal Pairs
Linyang He, Peili Chen, Ercong Nie, Yuanning Li, Jonathan R. Brennan (26 Mar 2024)
MELA: Multilingual Evaluation of Linguistic Acceptability
Ziyin Zhang, Yikang Liu, Wei Huang, Junyu Mao, Rui Wang, Hai Hu (15 Nov 2023)
A Method for Studying Semantic Construal in Grammatical Constructions with Interpretable Contextual Embedding Spaces
Gabriella Chronis, Kyle Mahowald, K. Erk (29 May 2023)
Syntax and Semantics Meet in the "Middle": Probing the Syntax-Semantics Interface of LMs Through Agentivity
Lindia Tjuatja, Emmy Liu, Lori S. Levin, Graham Neubig (29 May 2023)
Dissociating language and thought in large language models
Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko (16 Jan 2023)
Counteracts: Testing Stereotypical Representation in Pre-trained Language Models
Damin Zhang, Julia Taylor Rayz, Romila Pradhan (11 Jan 2023)
Representing Affect Information in Word Embeddings
Yuhan Zhang, Wenqi Chen, Ruihan Zhang, Xiajie Zhang (21 Sep 2022)
Assessing the Limits of the Distributional Hypothesis in Semantic Spaces: Trait-based Relational Knowledge and the Impact of Co-occurrences
Mark Anderson, Jose Camacho-Collados (16 May 2022)
When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it
Sebastian Schuster, Tal Linzen (06 May 2022)
Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models
Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster (17 Mar 2022)
Sparse Interventions in Language Models with Differentiable Masking
Nicola De Cao, Leon Schmid, Dieuwke Hupkes, Ivan Titov (13 Dec 2021)
Transformers in the loop: Polarity in neural models of language
Lisa Bylinina, Alexey Tikhonov (08 Sep 2021)
Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT
Isabel Papadimitriou, Ethan A. Chi, Richard Futrell, Kyle Mahowald (26 Jan 2021)
Can neural networks acquire a structural bias from raw linguistic data?
Alex Warstadt, Samuel R. Bowman (14 Jul 2020)
Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans
Yair Lakretz, Dieuwke Hupkes, A. Vergallito, Marco Marelli, Marco Baroni, S. Dehaene (19 Jun 2020)
A Systematic Assessment of Syntactic Generalization in Neural Language Models
Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Gotlieb Wilcox, R. Levy (07 May 2020)
Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
Josef Klafka, Allyson Ettinger (04 May 2020)
Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models
Grusha Prasad, Marten van Schijndel, Tal Linzen (23 Sep 2019)
Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment
Jaap Jumelet, Willem H. Zuidema, Dieuwke Hupkes (19 Sep 2019)
Linguistic Knowledge and Transferability of Contextual Representations
Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, Noah A. Smith (21 Mar 2019)
Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State
Richard Futrell, Ethan Gotlieb Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, R. Levy (08 Mar 2019)
Structural Supervision Improves Learning of Non-Local Grammatical Dependencies
Ethan Gotlieb Wilcox, Peng Qian, Richard Futrell, Miguel Ballesteros, R. Levy (03 Mar 2019)
Neural Network Acceptability Judgments
Alex Warstadt, Amanpreet Singh, Samuel R. Bowman (31 May 2018)