Large language models (LLMs) have shown impressive performance in code understanding and generation, making coding tasks a key focus for researchers, both for their practical applications and for their value as a testbed for LLM evaluation. Data synthesis and filtering techniques have been widely adopted in this context and shown to be highly effective. In this paper, we present a focused survey and taxonomy of these techniques, emphasizing recent advancements. We highlight key challenges, explore future research directions, and offer practical guidance for new researchers entering the field.
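To make the filtering side of this pipeline concrete, below is a minimal Python sketch of execution-based filtering, one common technique for validating synthesized code data before training. It is an illustration, not the paper's method; the sample format (dicts with "solution" and "test" fields) and all names are assumptions made for this example.

"""Illustrative sketch (assumed setup, not from the paper):
keep a synthesized sample only if its generated solution runs
and passes its accompanying unit test, executed in a subprocess
so hangs or crashes cannot take down the filtering job."""
import multiprocessing

def _run(sample, queue):
    try:
        env = {}
        exec(sample["solution"], env)  # run the LLM-generated solution
        exec(sample["test"], env)      # run its generated unit test
        queue.put(True)
    except Exception:
        queue.put(False)

def passes_tests(sample, timeout=5.0):
    """True iff the sample's code executes and its test passes
    within the time limit."""
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=_run, args=(sample, queue))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():        # timed out: treat as a failed sample
        proc.terminate()
        proc.join()
        return False
    return not queue.empty() and queue.get()

if __name__ == "__main__":
    synthesized = [
        {"solution": "def add(a, b):\n    return a + b",
         "test": "assert add(2, 3) == 5"},
        {"solution": "def add(a, b):\n    return a - b",  # buggy sample
         "test": "assert add(2, 3) == 5"},
    ]
    kept = [s for s in synthesized if passes_tests(s)]
    print(f"kept {len(kept)} of {len(synthesized)} samples")

In practice such filters run inside a sandbox rather than exec in-process, but the core idea is the same: an executable correctness signal prunes low-quality synthetic samples cheaply.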
@article{chen2025_2411.00005,
  title   = {Mastering the Craft of Data Synthesis for CodeLLMs},
  author  = {Meng Chen and Philip Arthur and Qianyu Feng and Cong Duy Vu Hoang and Yu-Heng Hong and Mahdi Kazemi Moghaddam and Omid Nezami and Thien Nguyen and Gioacchino Tangari and Duy Vu and Thanh Vu and Mark Johnson and Krishnaram Kenthapadi and Don Dharmasiri and Long Duong and Yuan-Fang Li},
  journal = {arXiv preprint arXiv:2411.00005},
  year    = {2025}
}