ArXiv: 2305.14701
Modeling rapid language learning by distilling Bayesian priors into artificial neural networks
R. Thomas McCoy, Thomas L. Griffiths
24 May 2023 · Tags: BDL
Papers citing "Modeling rapid language learning by distilling Bayesian priors into artificial neural networks" (8 papers):
1. On Language Models' Sensitivity to Suspicious Coincidences
   Sriram Padmanabhan, Kanishka Misra, Kyle Mahowald, Eunsol Choi
   13 Apr 2025 · Tags: ReLM, LRM
2. Language Models Trained to do Arithmetic Predict Human Risky and Intertemporal Choice
   Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths
   29 May 2024
3. Compositional diversity in visual concept learning
   Yanli Zhou, Reuben Feinman, Brenden Lake
   30 May 2023 · Tags: CoGe, OCL
4. Neural Networks and the Chomsky Hierarchy
   Grégoire Delétang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, L. Wenliang, ..., Chris Cundy, Marcus Hutter, Shane Legg, Joel Veness, Pedro A. Ortega
   05 Jul 2022 · Tags: UQCV
5. A Survey on Hyperdimensional Computing aka Vector Symbolic Architectures, Part I: Models and Data Transformations
   Denis Kleyko, D. Rachkovskij, Evgeny Osipov, Abbas Rahimi
   11 Nov 2021
6. Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
   Arabella J. Sinclair, Jaap Jumelet, Willem H. Zuidema, Raquel Fernández
   30 Sep 2021
7. How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
   Tal Linzen
   03 May 2020
8. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
   Chelsea Finn, Pieter Abbeel, Sergey Levine
   09 Mar 2017 · Tags: OOD