Learning by Distilling Context
Charles Burton Snell, Dan Klein, Ruiqi Zhong
arXiv 2209.15189 · 30 September 2022 · ReLM, LRM
Papers citing "Learning by Distilling Context" (5 papers)
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
28 Jan 2022 · LM&Ro, LRM, AI4CE, ReLM
Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
15 Oct 2021 · LRM
Meta-learning via Language Model In-context Tuning
Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, He He
15 Oct 2021
Towards Zero-Label Language Learning
Zirui Wang, Adams Wei Yu, Orhan Firat, Yuan Cao
19 Sep 2021 · SyDa
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020