Knowledge Distillation of LLM for Automatic Scoring of Science Education Assessments

26 December 2023
Ehsan Latif, Luyang Fang, Ping Ma, Xiaoming Zhai

Papers citing "Knowledge Distillation of LLM for Automatic Scoring of Science Education Assessments"

5 / 5 papers shown

Information Extraction from Clinical Notes: Are We Ready to Switch to Large Language Models?
Yan Hu, X. Zuo, Yujia Zhou, Xueqing Peng, J. Huang, ..., Ruey-Ling Weng, Qingyu Chen, Xiaoqian Jiang, Kirk Roberts, Hua Xu
08 Jan 2025 (LM&MA)

'Simulacrum of Stories': Examining Large Language Models as Qualitative Research Participants
Shivani Kapania, William Agnew, Motahhare Eslami, Hoda Heidari, Sarah E Fox
28 Sep 2024

Can Low-Rank Knowledge Distillation in LLMs be Useful for Microelectronic Reasoning?
N. Rouf, Fin Amin, Paul D. Franzon
19 Jun 2024

PromptMix: A Class Boundary Augmentation Method for Large Language Model Distillation
Gaurav Sahu, Olga Vechtomova, Dzmitry Bahdanau, I. Laradji
22 Oct 2023 (VLM)

Fine-tuning ChatGPT for Automatic Scoring
Ehsan Latif, Xiaoming Zhai
16 Oct 2023 (AI4MH)