Can LLMs Assist Computer Education? An Empirical Case Study of DeepSeek

This study presents an empirical case study assessing the efficacy and reliability of DeepSeek-V3, an emerging large language model, in the context of computer education. The evaluation employs both CCNA simulation questions and real-world computer network security inquiries posed by Chinese network engineers. To ensure thoroughness, the evaluation spans several dimensions, including role dependency, cross-linguistic proficiency, and answer reproducibility, supported by statistical analysis. The findings show that the model performs consistently regardless of whether prompts include a role definition. Its adaptability across languages is likewise confirmed: accuracy remains stable on both the original and translated datasets. A clear contrast emerges between its performance on lower-order factual recall tasks and higher-order reasoning exercises, underscoring its strength in information retrieval and its limitations in complex analytical tasks. Although DeepSeek-V3 offers considerable practical value for network security education, challenges remain in its ability to process multimodal data and address highly intricate topics. These results provide valuable insights for the future refinement of large language models in specialized professional settings.
@article{xiao2025_2504.00421,
  title={Can LLMs Assist Computer Education? An Empirical Case Study of DeepSeek},
  author={Dongfu Xiao and Chen Gao and Zhengquan Luo and Chi Liu and Sheng Shen},
  journal={arXiv preprint arXiv:2504.00421},
  year={2025}
}