Baby's CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models
arXiv:2308.01684 · 3 August 2023
Zheyu Zhang, Han Yang, Bolei Ma, David Rügamer, Ercong Nie
Tags: LRM
Papers citing "Baby's CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models" (8 of 8 papers shown):
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Alex Warstadt, Aaron Mueller, Leshem Choshen, E. Wilcox, Chengxu Zhuang, ..., Rafael Mosquera, Bhargavi Paranjape, Adina Williams, Tal Linzen, Ryan Cotterell
10 Apr 2025
BERTtime Stories: Investigating the Role of Synthetic Story Data in Language Pre-training
Nikitas Theodoropoulos, Giorgos Filandrianos, Vassilis Lyberatos, Maria Lymperaiou, Giorgos Stamou
Tags: SyDa
24 Feb 2025
Pre-Training to Learn in Context
Yuxian Gu, Li Dong, Furu Wei, Minlie Huang
Tags: CLIP, LRM, ReLM
16 May 2023
Generate rather than Retrieve: Large Language Models are Strong Context Generators
W. Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, Meng-Long Jiang
Tags: RALM, AIMat
21 Sep 2022
Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Tags: ReLM, LRM
24 May 2022
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
Tags: LM&Ro, LRM, AI4CE, ReLM
28 Jan 2022
Word Acquisition in Neural Language Models
Tyler A. Chang, Benjamin Bergen
05 Oct 2021
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Tags: ELM
20 Apr 2018