CodEv: An Automated Grading Framework Leveraging Large Language Models for Consistent and Constructive Feedback


BigData Congress [Services Society] (BSS), 2024
10 January 2025
En-Qi Tseng
Pei-Cing Huang
Chan Hsu
Peng-Yi Wu
Chan-Tung Ku
Yihuang Kang
ArXiv (abs) · PDF · HTML

Papers citing "CodEv: An Automated Grading Framework Leveraging Large Language Models for Consistent and Constructive Feedback"

9 / 9 papers shown
Rubric Is All You Need: Enhancing LLM-based Code Evaluation With Question-Specific Rubrics
International Computing Education Research Workshop (ICER), 2025
Aditya Pathak
Rachit Gandhi
Vaibhav Uttam
Devansh
Yashwanth Nakka
...
Aditya Mittal
Aashna Ased
Chirag Khatri
Jagat Sesh Challa
Dhruv Kumar
31 Mar 2025
Gemma 2: Improving Open Language Models at a Practical Size
Gemma Team
Morgane Riviere
Shreya Pathak
Pier Giuseppe Sessa
Cassidy Hardin
...
Noah Fiedel
Armand Joulin
Kathleen Kenealy
Robert Dadashi
Alek Andreev
VLM, MoE, OSLM
31 Jul 2024
Towards Greener LLMs: Bringing Energy-Efficiency to the Forefront of LLM Inference
Jovan Stojkovic
Esha Choukse
Chaojie Zhang
Inigo Goiri
Josep Torrellas
29 Mar 2024
More Agents Is All You Need
Junyou Li
Qin Zhang
Yangbin Yu
Qiang Fu
Deheng Ye
LLMAG
03 Feb 2024
Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Lokesh Nagalapatti
Chun-Liang Li
Chih-Kuan Yeh
Hootan Nakhost
Yasuhisa Fujii
Alexander Ratner
Ranjay Krishna
Chen-Yu Lee
Tomas Pfister
ALM
03 May 2023
G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Yang Liu
Dan Iter
Yichong Xu
Shuohang Wang
Ruochen Xu
Chenguang Zhu
ELM, ALM, LM&MA
29 Mar 2023
GPTScore: Evaluate as You Desire
North American Chapter of the Association for Computational Linguistics (NAACL), 2023
Jinlan Fu
See-Kiong Ng
Zhengbao Jiang
Pengfei Liu
LM&MA, ALM, ELM
08 Feb 2023
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Neural Information Processing Systems (NeurIPS), 2022
Jason W. Wei
Xuezhi Wang
Dale Schuurmans
Maarten Bosma
Brian Ichter
F. Xia
Ed H. Chi
Quoc Le
Denny Zhou
LM&Ro, LRM, AI4CE, ReLM
28 Jan 2022
Representation Learning: A Review and New Perspectives
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2012
Yoshua Bengio
Aaron Courville
Pascal Vincent
OOD, SSL
24 Jun 2012