
Parameter-Efficient Fine-Tuning of Large Language Models for Unit Test Generation: An Empirical Study

22 pages (main), 4 pages (bibliography), 2 figures, 8 tables
Abstract

Parameter-efficient fine-tuning (PEFT) methods, which fine-tune only a subset of model parameters, offer a promising way to reduce the computational cost of tuning large language models (LLMs) while maintaining their performance. Existing studies have applied PEFT to LLMs for various code-related tasks and found that the effectiveness of PEFT techniques is task-dependent. For unit test generation, however, the state of the art relies on fully fine-tuned LLMs, and the application of PEFT techniques remains underexplored. This paper investigates both full fine-tuning and several PEFT methods, including LoRA, (IA)^3, and prompt tuning, across thirteen models of different architectures and sizes. Using well-established benchmark datasets, we evaluate their effectiveness in unit test generation and measure the syntax correctness, CodeBLEU, pass@1, instruction coverage, branch coverage, and mutation score of the generated tests. Our findings show that LoRA can deliver performance comparable to full fine-tuning for unit test generation in several cases. When training cost is the primary concern, prompt tuning is the most cost-effective approach, particularly for large models. However, models tuned with full fine-tuning or PEFT may generate fewer executable test cases than the baseline model because they produce more tests that call nonexistent methods or contain type mismatches. Among the tests that are executable, those generated by the tuned models achieve better coverage than those from the baseline model.
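To illustrate the three PEFT methods compared in the paper, the sketch below shows how they could be configured with the Hugging Face `peft` library. This is not the authors' code; the model identifier and all hyperparameter values are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's setup): configuring LoRA, (IA)^3,
# and prompt tuning for a causal code LLM with Hugging Face `peft`.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, IA3Config, PromptTuningConfig, TaskType, get_peft_model

# Placeholder base model; the paper evaluates thirteen models of different sizes.
base = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

# LoRA: inject trainable low-rank adapters into the attention projections.
lora_cfg = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05)

# (IA)^3: learn per-activation rescaling vectors (even fewer trainable parameters);
# target_modules may need to be set explicitly depending on the architecture.
ia3_cfg = IA3Config(task_type=TaskType.CAUSAL_LM)

# Prompt tuning: learn a small number of virtual prompt tokens; cheapest to train.
pt_cfg = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)

# Wrap the base model with one of the configs (swap in ia3_cfg / pt_cfg to compare),
# then train with a standard Trainer loop on the unit test generation dataset.
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # reports the small fraction of parameters updated
```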
