
Improve LLM-based Automatic Essay Scoring with Linguistic Features

Abstract

Automatic Essay Scoring (AES) assigns scores to student essays, reducing the grading workload for instructors. Developing a scoring system capable of handling essays across diverse prompts is challenging because of the flexibility and diversity of the writing task. Existing methods typically fall into two categories: supervised feature-based approaches and large language model (LLM)-based methods. Supervised feature-based approaches often achieve higher performance but require resource-intensive training. In contrast, LLM-based methods are computationally efficient at inference time but tend to suffer from lower performance. This paper combines the two approaches by incorporating linguistic features into LLM-based scoring. Experimental results show that this hybrid method outperforms baseline models on both in-domain and out-of-domain writing prompts.
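To make the idea concrete, below is a minimal sketch of one way linguistic features could be surfaced to an LLM scorer through its prompt. The specific features (word count, average sentence length, type-token ratio), the 1-6 score scale, and the prompt wording are illustrative assumptions rather than the paper's actual design, and the LLM call itself is omitted.

import re

def extract_features(essay: str) -> dict:
    """Compute a few simple linguistic features.
    These feature choices are illustrative; the paper's feature set may differ."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

def build_scoring_prompt(essay: str, writing_prompt: str, features: dict) -> str:
    """Inject the features into the LLM scoring prompt as plain text."""
    feature_lines = "\n".join(
        f"- {k}: {v:.2f}" if isinstance(v, float) else f"- {k}: {v}"
        for k, v in features.items()
    )
    return (
        f"Score the following essay on a 1-6 scale for the writing prompt below.\n"
        f"Writing prompt: {writing_prompt}\n"
        f"Linguistic features of the essay:\n{feature_lines}\n"
        f"Essay:\n{essay}\n"
        f"Return only the integer score."
    )

essay = "Technology shapes how students learn. It offers new tools. However, it also distracts."
features = extract_features(essay)
print(build_scoring_prompt(essay, "Discuss the role of technology in education.", features))

In a purely feature-based pipeline these statistics would feed a supervised regressor; passing them to the LLM as prompt text instead keeps inference lightweight, in the spirit of the hybrid approach the abstract describes.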

@article{hou2025_2502.09497,
  title={Improve LLM-based Automatic Essay Scoring with Linguistic Features},
  author={Zhaoyi Joey Hou and Alejandro Ciuba and Xiang Lorraine Li},
  journal={arXiv preprint arXiv:2502.09497},
  year={2025}
}