ResearchTrend.AI
Colorless green recurrent networks dream hierarchically (arXiv:1803.11138)

29 March 2018
Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, Marco Baroni

Papers citing "Colorless green recurrent networks dream hierarchically"

50 / 285 papers shown

  • A Latent-Variable Model for Intrinsic Probing
    Karolina Stańczak, Lucas Torroba Hennigen, Adina Williams, Ryan Cotterell, Isabelle Augenstein (20 Jan 2022)
  • Interpreting Arabic Transformer Models
    Ahmed Abdelali, Nadir Durrani, Fahim Dalvi, Hassan Sajjad (19 Jan 2022)
  • Towards more patient friendly clinical notes through language models and ontologies
    Francesco Moramarco, Damir Juric, Aleksandar Savkov, Jack Flann, Maria Lehl, ..., Tessa Grafen, V. Zhelezniak, Sunir Gohil, Alex Papadopoulos Korfiatis, Nils Y. Hammerla (23 Dec 2021)
  • Sparse Interventions in Language Models with Differentiable Masking
    Nicola De Cao, Leon Schmid, Dieuwke Hupkes, Ivan Titov (13 Dec 2021)
  • To Augment or Not to Augment? A Comparative Study on Text Augmentation Techniques for Low-Resource NLP
    Gözde Gül Sahin (18 Nov 2021)
  • How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN
    R. Thomas McCoy, P. Smolensky, Tal Linzen, Jianfeng Gao, Asli Celikyilmaz (18 Nov 2021)
  • Variation and generality in encoding of syntactic anomaly information in sentence embeddings
    Qinxuan Wu, Allyson Ettinger (12 Nov 2021)
  • Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
    Bonan Min, Hayley L Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heinz, Dan Roth (01 Nov 2021)
  • Schrödinger's Tree -- On Syntax and Neural Language Models
    Artur Kulmizev, Joakim Nivre (17 Oct 2021)
  • Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans
    Yair Lakretz, T. Desbordes, Dieuwke Hupkes, S. Dehaene (14 Oct 2021)
  • Word Acquisition in Neural Language Models
    Tyler A. Chang, Benjamin Bergen (05 Oct 2021)
  • Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
    Arabella J. Sinclair, Jaap Jumelet, Willem H. Zuidema, Raquel Fernández (30 Sep 2021)
  • Sorting through the noise: Testing robustness of information processing in pre-trained language models
    Lalchand Pandia, Allyson Ettinger (25 Sep 2021)
  • Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus
    Daniela Trotta, R. Guarasci, Elisa Leonardelli, Sara Tonelli (24 Sep 2021)
  • Transformers Generalize Linearly
    Jackson Petty, Robert Frank (24 Sep 2021)
  • Controlled Evaluation of Grammatical Knowledge in Mandarin Chinese Language Models
    Yiwen Wang, Jennifer Hu, R. Levy, Peng Qian (22 Sep 2021)
  • Are Transformers a Modern Version of ELIZA? Observations on French Object Verb Agreement
    Bingzhi Li, Guillaume Wisniewski, Benoît Crabbé (21 Sep 2021)
  • Data Augmentation Methods for Anaphoric Zero Pronouns
    Abdulrahman Aloraini, Massimo Poesio (20 Sep 2021)
  • The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation
    Laura Aina, Tal Linzen (16 Sep 2021)
  • On the Limits of Minimal Pairs in Contrastive Evaluation
    Jannis Vamvas, Rico Sennrich (15 Sep 2021)
  • Frequency Effects on Syntactic Rule Learning in Transformers
    Jason W. Wei, Dan Garrette, Tal Linzen, Ellie Pavlick (14 Sep 2021)
  • Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars
    Ryo Yoshida, Hiroshi Noji, Yohei Oseki (10 Sep 2021)
  • Transformers in the loop: Polarity in neural models of language
    Lisa Bylinina, Alexey Tikhonov (08 Sep 2021)
  • How much pretraining data do language models need to learn syntax?
    Laura Pérez-Mayos, Miguel Ballesteros, Leo Wanner (07 Sep 2021)
  • So Cloze yet so Far: N400 Amplitude is Better Predicted by Distributional Information than Human Predictability Judgements
    J. Michaelov, S. Coulson, Benjamin Bergen (02 Sep 2021)
  • Neuron-level Interpretation of Deep NLP Models: A Survey
    Hassan Sajjad, Nadir Durrani, Fahim Dalvi (30 Aug 2021)
  • Local Structure Matters Most: Perturbation Study in NLU
    Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar (29 Jul 2021)
  • On the Difficulty of Translating Free-Order Case-Marking Languages
    Arianna Bisazza, Ahmet Üstün, Stephan Sportel (13 Jul 2021)
  • What do End-to-End Speech Models Learn about Speaker, Language and Channel Information? A Layer-wise and Neuron-level Analysis
    Shammur A. Chowdhury, Nadir Durrani, Ahmed M. Ali (01 Jul 2021)
  • Information Retrieval for ZeroSpeech 2021: The Submission by University of Wroclaw
    J. Chorowski, Grzegorz Ciesielski, Jaroslaw Dzikowski, Adrian Łańcucki, R. Marxer, Mateusz Opala, P. Pusz, Paweł Rychlikowski, Michal Stypulkowski (22 Jun 2021)
  • On the proper role of linguistically-oriented deep net analysis in linguistic theorizing
    Marco Baroni (16 Jun 2021)
  • Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models
    Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, Stuart M. Shieber, Tal Linzen, Yonatan Belinkov (10 Jun 2021)
  • Relative Importance in Sentence Processing
    Nora Hollenstein, Lisa Beinborn (07 Jun 2021)
  • A Targeted Assessment of Incremental Processing in Neural Language Models and Humans
    Ethan Gotlieb Wilcox, P. Vani, R. Levy (06 Jun 2021)
  • Do Grammatical Error Correction Models Realize Grammatical Generalization?
    Masato Mita, Hitomi Yanaka (06 Jun 2021)
  • Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing
    Rowan Hall Maudslay, Ryan Cotterell (04 Jun 2021)
  • Uncovering Constraint-Based Behavior in Neural Models via Targeted Fine-Tuning
    Forrest Davis, Marten van Schijndel (02 Jun 2021)
  • SyGNS: A Systematic Generalization Testbed Based on Natural Language Semantics
    Hitomi Yanaka, K. Mineshima, Kentaro Inui (02 Jun 2021)
  • Language Model Evaluation Beyond Perplexity
    Clara Meister, Ryan Cotterell (31 May 2021)
  • Effective Batching for Recurrent Neural Network Grammars
    Hiroshi Noji, Yohei Oseki (31 May 2021)
  • Language Models Use Monotonicity to Assess NPI Licensing
    Jaap Jumelet, Milica Denić, Jakub Szymanik, Dieuwke Hupkes, Shane Steinert-Threlkeld (28 May 2021)
  • Fine-grained Interpretation and Causation Analysis in Deep NLP Models
    Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani (17 May 2021)
  • How is BERT surprised? Layerwise detection of linguistic anomalies
    Bai Li, Zining Zhu, Guillaume Thomas, Yang Xu, Frank Rudzicz (16 May 2021)
  • Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction
    Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg (14 May 2021)
  • Slower is Better: Revisiting the Forgetting Mechanism in LSTM for Slower Information Decay
    H. Chien, Javier S. Turek, Nicole M. Beckage, Vy A. Vo, C. Honey, Ted Willke (12 May 2021)
  • Assessing the Syntactic Capabilities of Transformer-based Multilingual Language Models
    Laura Pérez-Mayos, Alba Táboas García, Simon Mille, Leo Wanner (10 May 2021)
  • Understanding by Understanding Not: Modeling Negation in Language Models
    Arian Hosseini, Siva Reddy, Dzmitry Bahdanau, R. Devon Hjelm, Alessandro Sordoni, Rameswar Panda (07 May 2021)
  • A Survey of Data Augmentation Approaches for NLP
    Steven Y. Feng, Varun Gangal, Jason W. Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard H. Hovy (07 May 2021)
  • Attention vs non-attention for a Shapley-based explanation method
    T. Kersten, Hugh Mee Wong, Jaap Jumelet, Dieuwke Hupkes (26 Apr 2021)
  • Refining Targeted Syntactic Evaluation of Language Models
    Benjamin Newman, Kai-Siang Ang, Julia Gong, John Hewitt (19 Apr 2021)