zsLLMCode: An Effective Approach for Code Embedding via LLM with Zero-Shot Learning

The advent of large language models (LLMs) has greatly advanced artificial intelligence (AI) in software engineering (SE), where code embeddings play a critical role in tasks such as code-clone detection and code clustering. However, existing code-embedding methods, including those based on LLMs, often depend on costly supervised training or fine-tuning for domain adaptation. This paper proposes a novel zero-shot approach, zsLLMCode, which generates code embeddings using LLMs and sentence embedding models. The approach eliminates the need for task-specific training or fine-tuning and mitigates the erroneous information commonly found in LLM-generated outputs. We conducted a series of experiments evaluating the proposed approach across various LLMs and embedding models. The results demonstrate that zsLLMCode outperforms state-of-the-art unsupervised approaches such as SourcererCC, Code2vec, InferCode, and TransformCode. Our findings highlight the potential of zsLLMCode to advance SE by providing robust and efficient solutions for code embedding tasks.
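To make the pipeline concrete, the sketch below shows one plausible reading of the approach described above: an LLM first produces a natural-language summary of each code snippet (zero-shot, no fine-tuning), and a frozen sentence embedding model then encodes that summary into a vector usable for clone detection or clustering. The helper summarize_with_llm and the embedding model name are illustrative placeholders, not the paper's exact configuration.

```python
# Minimal zero-shot code-embedding sketch in the spirit of zsLLMCode.
# Assumptions: any instruction-following LLM can stand in for
# summarize_with_llm, and any off-the-shelf sentence embedding model
# (here, a sentence-transformers checkpoint) can encode the summaries.
import numpy as np
from sentence_transformers import SentenceTransformer


def summarize_with_llm(code: str) -> str:
    """Placeholder for an LLM call that returns a concise natural-language
    summary of `code`. Plug in your preferred LLM client here."""
    raise NotImplementedError("replace with an actual LLM call")


def embed_snippets(snippets, embedder_name="all-MiniLM-L6-v2"):
    # The embedding model is used as-is (frozen); no training or fine-tuning.
    embedder = SentenceTransformer(embedder_name)
    summaries = [summarize_with_llm(code) for code in snippets]
    # Normalized vectors let cosine similarity reduce to a dot product.
    return embedder.encode(summaries, normalize_embeddings=True)


def clone_score(vec_a, vec_b) -> float:
    # Cosine similarity between two normalized embeddings; a high score
    # suggests the two snippets are likely functional clones.
    return float(np.dot(vec_a, vec_b))
```

In this reading, the LLM-generated summary acts as an intermediate representation that filters out surface-level syntax, so downstream tasks only need a general-purpose sentence embedder rather than a code-specific, fine-tuned encoder.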
@article{xian2025_2409.14644,
  title   = {zsLLMCode: An Effective Approach for Code Embedding via LLM with Zero-Shot Learning},
  author  = {Zixiang Xian and Chenhui Cui and Rubing Huang and Chunrong Fang and Zhenyu Chen},
  journal = {arXiv preprint arXiv:2409.14644},
  year    = {2025}
}