
Mastering the Craft of Data Synthesis for CodeLLMs

North American Chapter of the Association for Computational Linguistics (NAACL), 2024
Main: 9 pages · 3 figures · Bibliography: 7 pages · Appendix: 1 page
Abstract

Large language models (LLMs) have shown impressive performance in code understanding and generation, making coding tasks a key focus for researchers due to their practical applications and value as a testbed for LLM evaluation. Data synthesis and filtering techniques have been widely adopted and shown to be highly effective in this context. In this paper, we present a focused survey and taxonomy of these techniques, emphasizing recent advancements. We highlight key challenges, explore future research directions, and offer practical guidance for new researchers entering the field.
