
Generative AI Misuse Potential in Cyber Security Education: A Case Study of a UK Degree Program

17 pages main text, 3 pages bibliography, 1 figure, 6 tables
Abstract

Recent advances in generative artificial intelligence (AI), such as ChatGPT, Google Gemini, and other large language models (LLMs), pose significant challenges for maintaining academic integrity within higher education. This paper examines the structural susceptibility of a certified M.Sc. Cyber Security program at a UK Russell Group university to the misuse of LLMs. Building on and extending a recently proposed quantitative framework for estimating assessment-level exposure, we analyse all summative assessments on the program and derive both module-level and program-level exposure metrics. Our results show that the majority of modules exhibit high exposure to LLM misuse, driven largely by independent project- and report-based assessments, with the capstone dissertation module particularly vulnerable. We introduce a credit-weighted program exposure score and find that the program as a whole falls within a high to very high risk band. We also discuss contextual factors -- such as block teaching and a predominantly international cohort -- that may amplify incentives to misuse LLMs. In response, we outline a set of LLM-resistant assessment strategies, critically assess the limitations of detection-based approaches, and argue for a pedagogy-first approach that preserves academic standards while preparing students for the realities of professional cyber security practice.
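The credit-weighted program exposure score can be illustrated with a minimal sketch. The abstract does not give the exact formula, so the weighted-average form below is an assumption, and all module names, credit values, and exposure scores are hypothetical placeholders rather than data from the paper.

    # Minimal sketch of a credit-weighted program exposure score,
    # assuming it is the credit-weighted mean of module-level
    # exposure scores (the paper's exact formula is not given here).
    # All module data below are hypothetical placeholders.

    modules = [
        # (module name, credits, exposure score in [0, 1])
        ("Secure Programming",      15, 0.70),
        ("Network Security",        15, 0.55),
        ("Dissertation (capstone)", 60, 0.90),
    ]

    def program_exposure(modules):
        """Credit-weighted average of module exposure scores."""
        total_credits = sum(credits for _, credits, _ in modules)
        weighted = sum(credits * score for _, credits, score in modules)
        return weighted / total_credits

    score = program_exposure(modules)
    print(f"Program exposure score: {score:.2f}")  # about 0.81 here

Under this assumed formula, a heavily weighted capstone module with high exposure, such as the 60-credit dissertation placeholder above, dominates the program-level score, which is consistent with the abstract's finding that the dissertation is particularly vulnerable.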
