One Model for the Learning of Language (arXiv 1711.06301)
16 November 2017
Yuan Yang, Steven T. Piantadosi
Papers citing "One Model for the Learning of Language" (18 papers shown)
Are Large Language Models Reliable AI Scientists? Assessing Reverse-Engineering of Black-Box Systems
Jiayi Geng, Howard Chen, Dilip Arumugam, Thomas L. Griffiths (23 May 2025)
Meta-Learning Neural Mechanisms rather than Bayesian Priors
Michael Goodale, Salvador Mascarenhas, Yair Lakretz (20 Mar 2025)
Relational decomposition for program synthesis
Céline Hocquette, Andrew Cropper (22 Aug 2024)
No Such Thing as a General Learner: Language models and their dual optimization
Emmanuel Chemla, R. Nefdt (18 Aug 2024)
Building Machines that Learn and Think with People
Katherine M. Collins, Ilia Sucholutsky, Umang Bhatt, Kartik Chandra, Lionel Wong, ..., Mark K. Ho, Vikash K. Mansinghka, Adrian Weller, Joshua B. Tenenbaum, Thomas Griffiths (22 Jul 2024)
From Frege to chatGPT: Compositionality in language, cognition, and deep neural networks
Jacob Russin, Sam Whitman McGrath, Danielle J. Williams, Lotem Elber-Dorozko (24 May 2024)
Program-Based Strategy Induction for Reinforcement Learning
Carlos G. Correa, Thomas Griffiths, Nathaniel D. Daw (26 Feb 2024)
Opening the black box of language acquisition
Jérome Michaud, Anna Jon-And (18 Feb 2024)
Distilling Symbolic Priors for Concept Learning into Neural Networks
Ioana Marinescu, R. Thomas McCoy, Thomas Griffiths (10 Feb 2024)
Bayes in the age of intelligent machines
Thomas Griffiths, Jian-Qiao Zhu, Erin Grant, R. Thomas McCoy (16 Nov 2023)
In-Context Learning Dynamics with Random Binary Sequences
Eric J. Bigelow, Ekdeep Singh Lubana, Robert P. Dick, Hidenori Tanaka, T. Ullman (26 Oct 2023)
Humans and language models diverge when predicting repeating text
Aditya R. Vaidya, Javier S. Turek, Alexander G. Huth (10 Oct 2023)
The Relational Bottleneck as an Inductive Bias for Efficient Abstraction
Taylor Webb, Steven M. Frankland, Awni Altabaa, Simon N. Segert, Kamesh Krishnamurthy, ..., Tyler Giallanza, Zack Dulberg, Randall O'Reilly, John Lafferty, Jonathan D. Cohen (12 Sep 2023)
A Critical Review of Large Language Models: Sensitivity, Bias, and the Path Toward Specialized AI
Arash Hajikhani, Carolyn Cole (28 Jul 2023)
Modeling rapid language learning by distilling Bayesian priors into artificial neural networks
R. Thomas McCoy, Thomas Griffiths (24 May 2023)
Meta-Learned Models of Cognition
Marcel Binz, Ishita Dasgupta, Akshay K. Jagadish, M. Botvinick, Jane X. Wang, Eric Schulz (12 Apr 2023)
What Artificial Neural Networks Can Tell Us About Human Language Acquisition
Alex Warstadt, Samuel R. Bowman (17 Aug 2022)
Minimum Description Length Recurrent Neural Networks
Nur Lan, Michal Geyer, Emmanuel Chemla, Roni Katzir (31 Oct 2021)