Grammar-Based Code Representation: Is It a Worthy Pursuit for LLMs?
Grammar serves as a cornerstone of programming languages and software engineering, providing frameworks to define the syntactic space and program structure. Existing research demonstrates the effectiveness of grammar-based code representations in small-scale models, showing their ability to reduce syntax errors and enhance performance. However, as language models scale to billions of parameters and beyond, syntax-level errors become rare, making it unclear whether grammar information still provides a performance benefit. To explore this, we develop a series of billion-scale GrammarCoder models that incorporate grammar rules into the code generation process. Experiments on HumanEval(+) and MBPP(+) demonstrate a notable improvement in code generation accuracy. Further analysis shows that grammar-based representations enhance LLMs' ability to discern subtle code differences, reducing semantic errors caused by minor variations. These findings suggest that grammar-based code representations remain valuable even at billion scale, not only maintaining syntactic correctness but also improving semantic differentiation.
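To make the idea concrete, below is a minimal sketch of what a grammar-based code representation might look like: instead of serializing a program as a flat token stream, it is serialized as the preorder sequence of production-rule expansions of its syntax tree. The abstract does not specify the paper's exact serialization scheme; this illustration uses Python's standard ast module, and the rule format NodeType -> ChildType ... is a hypothetical choice for exposition.

```python
import ast

def grammar_rule_sequence(source: str) -> list[str]:
    """Serialize a program as a preorder sequence of grammar-rule
    expansions rather than a flat token stream (illustrative only)."""
    tree = ast.parse(source)
    rules: list[str] = []

    def visit(node: ast.AST) -> None:
        children = list(ast.iter_child_nodes(node))
        if children:
            # One "rule" per expansion: NodeType -> ChildType ChildType ...
            rhs = " ".join(type(child).__name__ for child in children)
            rules.append(f"{type(node).__name__} -> {rhs}")
        for child in children:
            visit(child)

    visit(tree)
    return rules

# Example: "x = 1 + 2" becomes a rule sequence instead of tokens:
#   Module -> Assign
#   Assign -> Name BinOp
#   Name -> Store
#   BinOp -> Constant Add Constant
print("\n".join(grammar_rule_sequence("x = 1 + 2")))
```

Under a representation like this, a decoder can only emit expansions licensed by the grammar, which is why small models trained on it produce fewer syntax errors; the paper's claim is that the same structural signal continues to pay off at billion scale by sharpening semantic distinctions between near-identical programs.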
@article{liang2025_2503.05507,
  title={Grammar-Based Code Representation: Is It a Worthy Pursuit for LLMs?},
  author={Qingyuan Liang and Zhao Zhang and Zeyu Sun and Zheng Lin and Qi Luo and Yueyi Xiao and Yizhou Chen and Yuqun Zhang and Haotian Zhang and Lu Zhang and Bin Chen and Yingfei Xiong},
  journal={arXiv preprint arXiv:2503.05507},
  year={2025}
}