How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases
Aaron Mueller, Tal Linzen
arXiv:2305.19905 · 31 May 2023
Papers citing "How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases" (6 of 6 papers shown)
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Alex Warstadt, Aaron Mueller, Leshem Choshen, E. Wilcox, Chengxu Zhuang, ..., Rafael Mosquera, Bhargavi Paranjape, Adina Williams, Tal Linzen, Ryan Cotterell
10 Apr 2025
BabyLM Challenge: Exploring the Effect of Variation Sets on Language Model Training Efficiency
Akari Haga, Akiyo Fukatsu, Miyu Oba, Arianna Bisazza, Yohei Oseki
14 Nov 2024
Filtered Corpus Training (FiCT) Shows that Language Models can Generalize from Indirect Evidence
Abhinav Patil, Jaap Jumelet, Yu Ying Chiu, Andy Lapastora, Peter Shen, Lexie Wang, Clevis Willrich, Shane Steinert-Threlkeld
24 May 2024
Transformers Generalize Linearly
Jackson Petty, Robert Frank
24 Sep 2021
Frequency Effects on Syntactic Rule Learning in Transformers
Jason W. Wei, Dan Garrette, Tal Linzen, Ellie Pavlick
14 Sep 2021
How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
Tal Linzen
03 May 2020