Automated Essay Scoring Incorporating Annotations from Automated Feedback Systems

28 May 2025
Christopher Ormerod
arXiv:2505.22771
Main: 7 pages, 2 figures, 8 tables; Bibliography: 3 pages
Abstract

This study illustrates how incorporating feedback-oriented annotations into the scoring pipeline can enhance the accuracy of automated essay scoring (AES). This approach is demonstrated with the Persuasive Essays for Rating, Selecting, and Understanding Argumentative and Discourse Elements (PERSUADE) corpus. We integrate two types of feedback-driven annotations: those that identify spelling and grammatical errors, and those that highlight argumentative components. To illustrate how this method could be applied in real-world scenarios, we employ two LLMs to generate annotations -- a generative language model used for spell-correction and an encoder-based token classifier trained to identify and mark argumentative elements. By incorporating annotations into the scoring process, we demonstrate improvements in performance using encoder-based large language models fine-tuned as classifiers.
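The paper page does not include code, but the abstract describes a concrete pipeline: feedback annotations (spelling/grammar errors and argumentative components) are produced by upstream models and folded into the essay text before an encoder-based classifier predicts the score. The sketch below is a minimal, hypothetical illustration of that general idea, not the authors' implementation: the marker tag names, the special-token handling, the DeBERTa checkpoint, and the six-label score range are all assumptions.

```python
# Hypothetical sketch: inject feedback annotations as inline markers, then
# score the annotated essay with an encoder-based classifier.
# Tag names, special tokens, model checkpoint, and label count are assumptions,
# not details taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "microsoft/deberta-v3-base"  # assumed encoder; the paper's choice may differ


def annotate(essay: str, spans: list[dict]) -> str:
    """Wrap annotated character spans with inline tags such as
    <claim>...</claim> or <sp_err>...</sp_err>.
    Each span dict has keys: "start", "end", "label"."""
    out, cursor = [], 0
    for s in sorted(spans, key=lambda s: s["start"]):
        out.append(essay[cursor:s["start"]])
        out.append(f"<{s['label']}>{essay[s['start']:s['end']]}</{s['label']}>")
        cursor = s["end"]
    out.append(essay[cursor:])
    return "".join(out)


tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Register the annotation markers as special tokens so the tokenizer keeps them whole.
markers = ["<claim>", "</claim>", "<evidence>", "</evidence>", "<sp_err>", "</sp_err>"]
tokenizer.add_special_tokens({"additional_special_tokens": markers})

# num_labels=6 assumes a 1-6 holistic score scale; adjust to the actual rubric.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=6)
model.resize_token_embeddings(len(tokenizer))  # account for the added marker tokens

# Toy example: one argumentative-component span and one flagged grammar error.
essay = "School should start later because students needs more sleep."
spans = [
    {"start": 0, "end": 25, "label": "claim"},    # "School should start later"
    {"start": 34, "end": 48, "label": "sp_err"},  # "students needs" flagged upstream
]

inputs = tokenizer(annotate(essay, spans), return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.argmax(dim=-1).item()
print(score)  # predicted score class (arbitrary until the model is fine-tuned)
```

In this reading, the annotators (a generative model for spelling correction, a token classifier for argumentative elements) only supply the spans; the scoring model itself is fine-tuned end to end on the marker-augmented text.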

@article{ormerod2025_2505.22771,
  title={Automated Essay Scoring Incorporating Annotations from Automated Feedback Systems},
  author={Christopher Ormerod},
  journal={arXiv preprint arXiv:2505.22771},
  year={2025}
}