Evaluating the Ability of LSTMs to Learn Context-Free Grammars

6 November 2018
Luzi Sennhauser, Robert C. Berwick
arXiv:1811.02611

Papers citing "Evaluating the Ability of LSTMs to Learn Context-Free Grammars"

6 / 6 papers shown

Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions
S. Bhattamishra, Arkil Patel, Varun Kanade, Phil Blunsom
22 Nov 2022

Assessing the Unitary RNN as an End-to-End Compositional Model of Syntax
Jean-Philippe Bernardy, Shalom Lappin
11 Aug 2022

Neural Networks and the Chomsky Hierarchy
Grégoire Delétang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, L. Wenliang, ..., Chris Cundy, Marcus Hutter, Shane Legg, Joel Veness, Pedro A. Ortega
05 Jul 2022

Thinking Like Transformers
Gail Weiss, Yoav Goldberg, Eran Yahav
13 Jun 2021

On the Computational Power of Transformers and its Implications in Sequence Modeling
S. Bhattamishra, Arkil Patel, Navin Goyal
16 Jun 2020

Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages
Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, Stuart M. Shieber
08 Nov 2019