Colorless green recurrent networks dream hierarchically
Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, Marco Baroni
arXiv:1803.11138, 29 March 2018
Papers citing "Colorless green recurrent networks dream hierarchically" (50 of 285 papers shown):
A multilabel approach to morphosyntactic probing (Naomi Tachikawa Shapiro, Amandalynne Paullada, Shane Steinert-Threlkeld; 17 Apr 2021)
XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation (Sebastian Ruder, Noah Constant, Jan A. Botha, Aditya Siddhant, Orhan Firat, ..., Pengfei Liu, Junjie Hu, Dan Garrette, Graham Neubig, Melvin Johnson; 15 Apr 2021)
Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little (Koustuv Sinha, Robin Jia, Dieuwke Hupkes, J. Pineau, Adina Williams, Douwe Kiela; 14 Apr 2021)
The Rediscovery Hypothesis: Language Models Need to Meet Linguistics (Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina Babazhanova, Matthias Gallé, Z. Assylbekov; 02 Mar 2021)
Vyākarana: A Colorless Green Benchmark for Syntactic Evaluation in Indic Languages (Rajaswa Patil, Jasleen Dhillon, Siddhant Mahurkar, Saumitra Kulkarni, M. Malhotra, V. Baths; 01 Mar 2021)
Language Modelling as a Multi-Task Problem (Leon Weber, Jaap Jumelet, Elia Bruni, Dieuwke Hupkes; 27 Jan 2021)
CLiMP: A Benchmark for Chinese Language Model Evaluation (Beilei Xiang, Changbing Yang, Yu Li, Alex Warstadt, Katharina Kann; 26 Jan 2021)
Deep Subjecthood: Higher-Order Grammatical Features in Multilingual BERT (Isabel Papadimitriou, Ethan A. Chi, Richard Futrell, Kyle Mahowald; 26 Jan 2021)
Evaluating Models of Robust Word Recognition with Serial Reproduction (Stephan C. Meylan, Sathvik Nair, Thomas Griffiths; 24 Jan 2021)
Can RNNs learn Recursive Nested Subject-Verb Agreements? (Yair Lakretz, T. Desbordes, J. King, Benoît Crabbé, Maxime Oquab, S. Dehaene; 06 Jan 2021)
Recoding latent sentence representations -- Dynamic gradient-based activation modification in RNNs (Dennis Ulmer; 03 Jan 2021)
Mapping the Timescale Organization of Neural Language Models (H. Chien, Jinhang Zhang, C. Honey; 12 Dec 2020)
Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis (Michael A. Lepori, R. Thomas McCoy; 24 Nov 2020)
The Zero Resource Speech Benchmark 2021: Metrics and baselines for unsupervised spoken language modeling (Tu Nguyen, Maureen de Seyssel, Patricia Roze, M. Rivière, Evgeny Kharitonov, Alexei Baevski, Ewan Dunbar, Emmanuel Dupoux; 23 Nov 2020)
diagNNose: A Library for Neural Activation Analysis (Jaap Jumelet; 13 Nov 2020)
CxGBERT: BERT meets Construction Grammar (Harish Tayyar Madabushi, Laurence Romain, Dagmar Divjak, P. Milin; 09 Nov 2020)
On the Practical Ability of Recurrent Neural Networks to Recognize Hierarchical Languages (S. Bhattamishra, Kabir Ahuja, Navin Goyal; 08 Nov 2020)
Investigating Novel Verb Learning in BERT: Selectional Preference Classes and Alternation-Based Syntactic Generalization (Tristan Thrush, Ethan Gotlieb Wilcox, R. Levy; 04 Nov 2020)
Word Frequency Does Not Predict Grammatical Knowledge in Language Models (Charles Yu, Ryan Sie, Nicolas Tedeschi, Leon Bergen; 26 Oct 2020)
Learning to Recognize Dialect Features (Dorottya Demszky, D. Sharma, J. Clark, Vinodkumar Prabhakaran, Jacob Eisenstein; 23 Oct 2020)
A Survey on Recent Approaches for Natural Language Processing in Low-Resource Scenarios (Michael A. Hedderich, Lukas Lange, Heike Adel, Jannik Strötgen, Dietrich Klakow; 23 Oct 2020)
Explicitly Modeling Syntax in Language Models with Incremental Parsing and a Dynamic Oracle (Songlin Yang, Shawn Tan, Alessandro Sordoni, Siva Reddy, Rameswar Panda; 21 Oct 2020)
RNNs can generate bounded hierarchical languages with optimal memory (John Hewitt, Michael Hahn, Surya Ganguli, Percy Liang, Christopher D. Manning; 15 Oct 2020)
Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models (Ethan Gotlieb Wilcox, Peng Qian, Richard Futrell, Ryosuke Kohita, R. Levy, Miguel Ballesteros; 12 Oct 2020)
COGS: A Compositional Generalization Challenge Based on Semantic Interpretation (Najoung Kim, Tal Linzen; 12 Oct 2020)
Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually) (Alex Warstadt, Yian Zhang, Haau-Sing Li, Haokun Liu, Samuel R. Bowman; 11 Oct 2020)
Unsupervised Distillation of Syntactic Information from Contextualized Word Representations (Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg; 11 Oct 2020)
Can RNNs trained on harder subject-verb agreement instances still perform well on easier ones? (Hritik Bansal, Gantavya Bhatt, Sumeet Agarwal; 10 Oct 2020)
Discourse structure interacts with reference but not syntax in neural language models (Forrest Davis, Marten van Schijndel; 10 Oct 2020)
How well does surprisal explain N400 amplitude under different experimental conditions? (J. Michaelov, Benjamin Bergen; 09 Oct 2020)
Recurrent babbling: evaluating the acquisition of grammar from limited input data (Ludovica Pannitto, Aurélie Herbelot; 09 Oct 2020)
BERTering RAMS: What and How Much does BERT Already Know About Event Arguments? -- A Study on the RAMS Dataset (Varun Gangal, Eduard H. Hovy; 08 Oct 2020)
Assessing Phrasal Representation and Composition in Transformers (Lang-Chi Yu, Allyson Ettinger; 08 Oct 2020)
Exploring BERT's Sensitivity to Lexical Cues using Tests from Semantic Priming (Kanishka Misra, Allyson Ettinger, Julia Taylor Rayz; 06 Oct 2020)
Intrinsic Probing through Dimension Selection (Lucas Torroba Hennigen, Adina Williams, Ryan Cotterell; 06 Oct 2020)
LSTMs Compose (and Learn) Bottom-Up (Naomi Saphra, Adam Lopez; 06 Oct 2020)
Investigating representations of verb bias in neural language models (Robert D. Hawkins, Takateru Yamakoshi, Thomas Griffiths, A. Goldberg; 05 Oct 2020)
Multi-timescale Representation Learning in LSTM Language Models (Shivangi Mahto, Vy A. Vo, Javier S. Turek, Alexander G. Huth; 27 Sep 2020)
Simple is Better! Lightweight Data Augmentation for Low Resource Slot Filling and Intent Classification (Samuel Louvan, Bernardo Magnini; 08 Sep 2020)
Can neural networks acquire a structural bias from raw linguistic data? (Alex Warstadt, Samuel R. Bowman; 14 Jul 2020)
Evaluating German Transformer Language Models with Syntactic Agreement Tests (Karolina Zaczynska, Nils Feldhus, Robert Schwarzenberg, Aleksandra Gabryszak, Sebastian Möller; 07 Jul 2020)
Universal linguistic inductive biases via meta-learning (R. Thomas McCoy, Erin Grant, P. Smolensky, Thomas Griffiths, Tal Linzen; 29 Jun 2020)
Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans (Yair Lakretz, Dieuwke Hupkes, A. Vergallito, Marco Marelli, Marco Baroni, S. Dehaene; 19 Jun 2020)
How to Probe Sentence Embeddings in Low-Resource Languages: On Structural Design Choices for Probing Task Evaluation (Steffen Eger, Johannes Daxenberger, Iryna Gurevych; 16 Jun 2020)
How much complexity does an RNN architecture need to learn syntax-sensitive dependencies? (Gantavya Bhatt, Hritik Bansal, Rishu Singh, Sumeet Agarwal; 17 May 2020)
A Systematic Assessment of Syntactic Generalization in Neural Language Models (Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Gotlieb Wilcox, R. Levy; 07 May 2020)
Weakly-Supervised Neural Response Selection from an Ensemble of Task-Specialised Dialogue Agents (Asir Saeed, Khai Mai, Pham Quang Nhat Minh, Nguyen Tuan Duc, Danushka Bollegala; 06 May 2020)
Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words (Josef Klafka, Allyson Ettinger; 04 May 2020)
The Sensitivity of Language Models and Humans to Winograd Schema Perturbations (Mostafa Abdou, Vinit Ravishankar, Maria Barrett, Yonatan Belinkov, Desmond Elliott, Anders Søgaard; 04 May 2020)
From SPMRL to NMRL: What Did We Learn (and Unlearn) in a Decade of Parsing Morphologically-Rich Languages (MRLs)? (Reut Tsarfaty, Dan Bareket, Stav Klein, Amit Seker; 04 May 2020)