Colorless green recurrent networks dream hierarchically
Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, Marco Baroni
arXiv: 1803.11138, 29 March 2018

Papers citing "Colorless green recurrent networks dream hierarchically" (50 of 285 shown):
1. "Has It All Been Solved? Open NLP Research Questions Not Solved by Large Language Models" (21 May 2023). Oana Ignat, Zhijing Jin, Artem Abzaliev, Laura Biester, Santiago Castro, ..., Verónica Pérez-Rosas, Siqi Shen, Zekun Wang, Winston Wu, Rada Mihalcea. [LRM]
2. "Exploring How Generative Adversarial Networks Learn Phonological Representations" (21 May 2023). Jing Chen, Micha Elsner. [GAN]
3. "Large Linguistic Models: Investigating LLMs' metalinguistic abilities" (01 May 2023). Gašper Beguš, Maksymilian Dąbkowski, Ryan Rhodes. [LRM]
4. "The Learnability of In-Context Learning" (14 Mar 2023). Noam Wies, Yoav Levine, Amnon Shashua.
5. "Do large language models resemble humans in language use?" (10 Mar 2023). Zhenguang G. Cai, Xufeng Duan, David A. Haslett, Shuqi Wang, M. Pickering. [ALM]
6. "Spelling convention sensitivity in neural language models" (06 Mar 2023). Elizabeth Nielsen, Christo Kirov, Brian Roark.
7. "NxPlain: Web-based Tool for Discovery of Latent Concepts" (06 Mar 2023). Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Tamim Jaban, Musab Husaini, Ummar Abbas.
8. "A Discerning Several Thousand Judgments: GPT-3 Rates the Article + Adjective + Numeral + Noun Construction" (29 Jan 2023). Kyle Mahowald.
9. "How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech" (26 Jan 2023). Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy.
10. "Dissociating language and thought in large language models" (16 Jan 2023). Kyle Mahowald, Anna A. Ivanova, I. Blank, Nancy Kanwisher, J. Tenenbaum, Evelina Fedorenko. [ELM, ReLM]
11. "Counteracts: Testing Stereotypical Representation in Pre-trained Language Models" (11 Jan 2023). Damin Zhang, Julia Taylor Rayz, Romila Pradhan.
12. "Assessing the Capacity of Transformer to Abstract Syntactic Representations: A Contrastive Analysis Based on Long-distance Agreement" (08 Dec 2022). Bingzhi Li, Guillaume Wisniewski, Benoît Crabbé.
13. "Syntactic Substitutability as Unsupervised Dependency Syntax" (29 Nov 2022). Jasper Jian, Siva Reddy.
14. "Prompting Language Models for Linguistic Structure" (15 Nov 2022). Terra Blevins, Hila Gonen, Luke Zettlemoyer. [LRM]
15. "Collateral facilitation in humans and language models" (09 Nov 2022). J. Michaelov, Benjamin Bergen.
16. "Do LSTMs See Gender? Probing the Ability of LSTMs to Learn Abstract Syntactic Rules" (31 Oct 2022). Priyanka Sukumaran, Conor J. Houghton, N. Kazanina.
17. "Characterizing Verbatim Short-Term Memory in Neural Language Models" (24 Oct 2022). K. Armeni, C. Honey, Tal Linzen. [KELM, RALM]
18. "On the Transformation of Latent Space in Fine-Tuned NLP Models" (23 Oct 2022). Nadir Durrani, Hassan Sajjad, Fahim Dalvi, Firoj Alam.
19. "Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities" (21 Oct 2022). Suhas Arehalli, Brian Dillon, Tal Linzen.
20. "SLING: Sino Linguistic Evaluation of Large Language Models" (21 Oct 2022). Yixiao Song, Kalpesh Krishna, R. Bhatt, Mohit Iyyer.
21. "Post-hoc analysis of Arabic transformer models" (18 Oct 2022). Ahmed Abdelali, Nadir Durrani, Fahim Dalvi, Hassan Sajjad.
22. "Transparency Helps Reveal When Language Models Learn Meaning" (14 Oct 2022). Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith.
23. "On the Explainability of Natural Language Processing Deep Models" (13 Oct 2022). Julia El Zini, M. Awad.
24. "State-of-the-art generalisation research in NLP: A taxonomy and review" (06 Oct 2022). Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, ..., Leila Khalatbari, Maria Ryskina, Rita Frieske, Ryan Cotterell, Zhijing Jin.
25. "Are word boundaries useful for unsupervised language learning?" (06 Oct 2022). Tu Nguyen, Maureen de Seyssel, Robin Algayres, Patricia Roze, Ewan Dunbar, Emmanuel Dupoux.
26. "The boundaries of meaning: a case study in neural machine translation" (02 Oct 2022). Yuri Balashov.
27. "Subject Verb Agreement Error Patterns in Meaningless Sentences: Humans vs. BERT" (21 Sep 2022). Karim Lasri, Olga Seminck, Alessandro Lenci, Thierry Poibeau.
28. "Corpus-Guided Contrast Sets for Morphosyntactic Feature Detection in Low-Resource English Varieties" (15 Sep 2022). Tessa Masis, A. Neal, Lisa Green, Brendan O'Connor.
29. "What Artificial Neural Networks Can Tell Us About Human Language Acquisition" (17 Aug 2022). Alex Warstadt, Samuel R. Bowman.
30. "Assessing the Unitary RNN as an End-to-End Compositional Model of Syntax" (11 Aug 2022). Jean-Philippe Bernardy, Shalom Lappin.
31. "The Birth of Bias: A case study on the evolution of gender bias in an English language model" (21 Jul 2022). Oskar van der Wal, Jaap Jumelet, K. Schulz, Willem H. Zuidema.
32. "Analyzing Encoded Concepts in Transformer Language Models" (27 Jun 2022). Hassan Sajjad, Nadir Durrani, Fahim Dalvi, Firoj Alam, A. Khan, Jia Xu.
33. "Discovering Salient Neurons in Deep NLP Models" (27 Jun 2022). Nadir Durrani, Fahim Dalvi, Hassan Sajjad. [KELM, MILM]
34. "Defending Compositionality in Emergent Languages" (09 Jun 2022). Michal Auersperger, Pavel Pecina.
35. "A computational psycholinguistic evaluation of the syntactic abilities of Galician BERT models at the interface of dependency resolution and training time" (06 Jun 2022). Iria de-Dios-Flores, Marcos Garcia.
36. "Linear Connectivity Reveals Generalization Strategies" (24 May 2022). Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, Naomi Saphra.
37. "Blackbird's language matrices (BLMs): a new benchmark to investigate disentangled generalisation in neural networks" (22 May 2022). Paola Merlo, A. An, M. A. Rodriguez.
38. "Assessing the Limits of the Distributional Hypothesis in Semantic Spaces: Trait-based Relational Knowledge and the Impact of Co-occurrences" (16 May 2022). Mark Anderson, Jose Camacho-Collados.
39. "Is the Computation of Abstract Sameness Relations Human-Like in Neural Language Models?" (12 May 2022). Lukas Thoma, Benjamin Roth.
40. "When a sentence does not introduce a discourse entity, Transformer-based models still sometimes refer to it" (06 May 2022). Sebastian Schuster, Tal Linzen.
41. "Finding patterns in Knowledge Attribution for Transformers" (03 May 2022). Jeevesh Juneja, Ritu Agarwal. [KELM]
42. "Probing for the Usage of Grammatical Number" (19 Apr 2022). Karim Lasri, Tiago Pimentel, Alessandro Lenci, Thierry Poibeau, Ryan Cotterell.
43. "Multilingual Syntax-aware Language Modeling through Dependency Tree Conversion" (19 Apr 2022). Shun Kando, Hiroshi Noji, Yusuke Miyao.
44. "Probing for Constituency Structure in Neural Language Models" (13 Apr 2022). David Arps, Younes Samih, Laura Kallmeyer, Hassan Sajjad.
45. "Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality" (07 Apr 2022). Tristan Thrush, Ryan Jiang, Max Bartolo, Amanpreet Singh, Adina Williams, Douwe Kiela, Candace Ross. [CoGe]
46. "When classifying grammatical role, BERT doesn't care about word order... except when it matters" (11 Mar 2022). Isabel Papadimitriou, Richard Futrell, Kyle Mahowald. [MILM]
47. "Neural reality of argument structure constructions" (24 Feb 2022). Bai Li, Zining Zhu, Guillaume Thomas, Frank Rudzicz, Yang Xu.
48. "Probing BERT's priors with serial reproduction chains" (24 Feb 2022). Takateru Yamakoshi, Thomas Griffiths, Robert D. Hawkins.
49. "Grammatical cues to subjecthood are redundant in a majority of simple clauses across languages" (30 Jan 2022). Kyle Mahowald, Evgeniia Diachek, E. Gibson, Evelina Fedorenko, Richard Futrell.
50. "Systematic Investigation of Strategies Tailored for Low-Resource Settings for Low-Resource Dependency Parsing" (27 Jan 2022). Jivnesh Sandhan, Laxmidhar Behera, Pawan Goyal.