Transcendence: Generative Models Can Outperform The Experts That Train Them
17 June 2024
Edwin Zhang, Vincent Zhu, Naomi Saphra, Anat Kleiman, Benjamin L. Edelman, Milind Tambe, Sham Kakade, Eran Malach

Papers citing "Transcendence: Generative Models Can Outperform The Experts That Train Them" (8 papers)

Data-Efficient Multi-Agent Spatial Planning with LLMs
26 Feb 2025
Huangyuan Su, Aaron Walsman, Daniel Garces, Sham Kakade, Stephanie Gil
Topics: LLMAG
Presented at ResearchTrend Connect | LLMAG on 28 Mar 2025

Self-Improving Transformers Overcome Easy-to-Hard and Length Generalization Challenges
03 Feb 2025
Nayoung Lee, Ziyang Cai, Avi Schwarzschild, Kangwook Lee, Dimitris Papailiopoulos
Topics: ReLM, VLM, LRM, AI4CE

Provable Weak-to-Strong Generalization via Benign Overfitting
06 Oct 2024
David X. Wu, A. Sahai

EnsemW2S: Can an Ensemble of LLMs be Leveraged to Obtain a Stronger LLM?
06 Oct 2024
Aakriti Agrawal, Mucong Ding, Zora Che, Chenghao Deng, Anirudh Satheesh, John Langford, Furong Huang

Human-aligned Chess with a Bit of Search
04 Oct 2024
Yiming Zhang, Athul Paul Jacob, Vivian Lai, Daniel Fried, Daphne Ippolito

AI Safety in Generative AI Large Language Models: A Survey
06 Jul 2024
Jaymari Chua, Yun Yvonna Li, Shiyi Yang, Chen Wang, Lina Yao
Topics: LM&MA

Emergent World Models and Latent Variable Estimation in Chess-Playing Language Models
21 Mar 2024
Adam Karvonen

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
04 May 2020
Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
Topics: OffRL, GP