Knowledge Distillation of LLM for Automatic Scoring of Science Education Assessments
Ehsan Latif, Luyang Fang, Ping Ma, Xiaoming Zhai
arXiv:2312.15842, 26 December 2023
Papers citing "Knowledge Distillation of LLM for Automatic Scoring of Science Education Assessments" (5 papers shown):
1. Information Extraction from Clinical Notes: Are We Ready to Switch to Large Language Models? (LM&MA)
   Yan Hu, X. Zuo, Yujia Zhou, Xueqing Peng, J. Huang, ..., Ruey-Ling Weng, Qingyu Chen, Xiaoqian Jiang, Kirk Roberts, Hua Xu
   08 Jan 2025

2. 'Simulacrum of Stories': Examining Large Language Models as Qualitative Research Participants
   Shivani Kapania, William Agnew, Motahhare Eslami, Hoda Heidari, Sarah E Fox
   28 Sep 2024

3. Can Low-Rank Knowledge Distillation in LLMs be Useful for Microelectronic Reasoning?
   N. Rouf, Fin Amin, Paul D. Franzon
   19 Jun 2024

4. PromptMix: A Class Boundary Augmentation Method for Large Language Model Distillation (VLM)
   Gaurav Sahu, Olga Vechtomova, Dzmitry Bahdanau, I. Laradji
   22 Oct 2023

5. Fine-tuning ChatGPT for Automatic Scoring (AI4MH)
   Ehsan Latif, Xiaoming Zhai
   16 Oct 2023