How Does Code Pretraining Affect Language Model Task Performance?
Jackson Petty, Sjoerd van Steenkiste, Tal Linzen
arXiv:2409.04556 · 6 September 2024
Papers citing "How Does Code Pretraining Affect Language Model Task Performance?" (7 of 7 papers shown)
Trillion 7B Technical Report
Sungjun Han, Juyoung Suk, Suyeong An, Hyungguk Kim, Kyuseok Kim, Wonsuk Yang, Seungtaek Choi, Jamin Shin
21 Apr 2025

Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining
Rosie Zhao, Alexandru Meterez, Sham Kakade, C. Pehlevan, Samy Jelassi, Eran Malach
10 Apr 2025

Rethinking Multilingual Continual Pretraining: Data Mixing for Adapting LLMs Across Languages and Resources
Zihao Li, Shaoxiong Ji, Hengyu Luo, Jörg Tiedemann
05 Apr 2025

Not-Just-Scaling Laws: Towards a Better Understanding of the Downstream Impact of Language Model Design Decisions
E. Liu, Amanda Bertsch, Lintang Sutawika, Lindia Tjuatja, Patrick Fernandes, ..., Carolin (Haas) Lawrence, Aditi Raghunathan, Kiril Gashteovski, Graham Neubig
05 Mar 2025

General Reasoning Requires Learning to Reason from the Get-go
Seungwook Han, Jyothish Pari, Samuel J. Gershman, Pulkit Agrawal
26 Feb 2025

IPO: Your Language Model is Secretly a Preference Classifier
Shivank Garg, Ayush Singh, Shweta Singh, Paras Chopra
22 Feb 2025

Uncovering Autoregressive LLM Knowledge of Thematic Fit in Event Representation
Safeyah Khaled Alshemali, Daniel Bauer, Yuval Marton
19 Oct 2024