Case2Code: Scalable Synthetic Data for Code Generation

17 July 2024
Yunfan Shao
Linyang Li
Yichuan Ma
Peiji Li
Demin Song
Qinyuan Cheng
Shimin Li
Xiaonan Li
Pengyu Wang
Qipeng Guo
Hang Yan
Xipeng Qiu
Xuanjing Huang
Dahua Lin
Abstract

Large Language Models (LLMs) have shown outstanding breakthroughs in code generation. Recent work improves code LLMs by training on synthetic data generated by powerful LLMs, which can be challenging to scale due to the dependence on a teacher model and high generation costs. In this paper, we focus on synthesizing code data at scale and propose a Case2Code task by exploiting the expressiveness and correctness of programs. Case2Code is an inductive inference task that aims to infer underlying code implementations by observing input-output examples or program behaviors. By incorporating LLMs to generate program inputs, and executing the programs on these inputs to obtain the corresponding outputs, we can synthesize diverse and high-quality Case2Code data at scale for training and evaluating code LLMs. Experimental results show that case-to-code induction is challenging for current representative LLMs if they are untrained. Models trained with Case2Code improve performance not only on in-distribution case-to-code induction but also on various code-generation tasks, demonstrating the great potential of large-scale synthetic data and inductive learning.
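The abstract describes a simple synthesis loop: take an existing program, have an LLM propose candidate inputs, execute the program on those inputs, and keep the observed input-output cases paired with the source code as one training example. Below is a minimal sketch of that loop under stated assumptions: candidate programs are Python functions with a single entry point, and propose_inputs is a hypothetical placeholder for the LLM call; the authors' actual prompts, filtering, and data formatting are not specified here.

import json
from typing import Any, Callable, Dict, List, Tuple


def propose_inputs(source: str, n: int = 5) -> List[Tuple[Any, ...]]:
    """Hypothetical stand-in for the LLM that reads a candidate program's
    source and proposes plausible argument tuples. Here it just returns a
    few small integers so the sketch runs end to end."""
    return [(i,) for i in range(n)]


def synthesize_case2code(source: str, entry_point: str, n_cases: int = 5) -> Dict[str, Any]:
    """Build one Case2Code sample: execute the candidate program on
    LLM-proposed inputs and record the observed input-output pairs."""
    namespace: Dict[str, Any] = {}
    exec(compile(source, "<candidate>", "exec"), namespace)  # load the candidate program
    func: Callable[..., Any] = namespace[entry_point]

    cases = []
    for args in propose_inputs(source, n_cases):
        try:
            output = func(*args)   # run the program to obtain the ground-truth output
        except Exception:
            continue               # drop inputs the program cannot handle
        cases.append({"input": list(args), "output": output})

    # A model trained on such samples is asked to induce `source` given only `cases`.
    return {"cases": cases, "solution": source}


if __name__ == "__main__":
    candidate = "def square_plus_one(x):\n    return x * x + 1\n"
    print(json.dumps(synthesize_case2code(candidate, "square_plus_one"), indent=2))

Because correctness comes from actually executing the program rather than from a teacher model, this kind of pipeline can scale without relying on a stronger LLM to write the target code.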

@article{shao2025_2407.12504,
  title={Case2Code: Scalable Synthetic Data for Code Generation},
  author={Yunfan Shao and Linyang Li and Yichuan Ma and Peiji Li and Demin Song and Qinyuan Cheng and Shimin Li and Xiaonan Li and Pengyu Wang and Qipeng Guo and Hang Yan and Xipeng Qiu and Xuanjing Huang and Dahua Lin},
  journal={arXiv preprint arXiv:2407.12504},
  year={2025}
}