Evaluating Transformer's Ability to Learn Mildly Context-Sensitive Languages
arXiv:2309.00857
2 September 2023
Shunjie Wang, Shane Steinert-Threlkeld
Papers citing "Evaluating Transformer's Ability to Learn Mildly Context-Sensitive Languages" (7 of 7 papers shown)

What Languages are Easy to Language-Model? A Perspective from Learning Probabilistic Regular Languages
Nadav Borenstein, Anej Svete, R. Chan, Josef Valvoda, Franz Nowak, Isabelle Augenstein, Eleanor Chodroff, Ryan Cotterell
06 Jun 2024

Models of symbol emergence in communication: a conceptual review and a guide for avoiding local minima
Julian Zubek, Tomasz Korbak, J. Rączaszek-Leonardi
08 Mar 2023

Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions
S. Bhattamishra, Arkil Patel, Varun Kanade, Phil Blunsom
22 Nov 2022

A Logic for Expressing Log-Precision Transformers
William Merrill, Ashish Sabharwal
Tags: ReLM, NAI, LRM
06 Oct 2022

Neural Networks and the Chomsky Hierarchy
Grégoire Delétang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, L. Wenliang, ..., Chris Cundy, Marcus Hutter, Shane Legg, Joel Veness, Pedro A. Ortega
Tags: UQCV
05 Jul 2022

How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
Tal Linzen
03 May 2020

Effective Approaches to Attention-based Neural Machine Translation
Thang Luong, Hieu H. Pham, Christopher D. Manning
17 Aug 2015