Jailbreak Detection in Clinical Training LLMs Using Feature-Based Predictive Models

Abstract

Jailbreaking in Large Language Models (LLMs) threatens their safe use in sensitive domains such as education by allowing users to bypass ethical safeguards. This study focuses on detecting jailbreaks in 2-Sigma, a clinical education platform that simulates patient interactions using LLMs. We annotated over 2,300 prompts across 158 conversations using four linguistic variables shown to correlate strongly with jailbreak behavior. The extracted features were used to train several predictive models, including decision trees, fuzzy-logic-based classifiers, boosting methods, and logistic regression. Results show that these feature-based predictive models consistently outperformed prompt-engineering-based detection, with the fuzzy decision tree achieving the best overall performance. Our findings demonstrate that linguistic-feature-based models are effective and explainable alternatives for jailbreak detection. We suggest that future work explore hybrid frameworks that integrate prompt-based flexibility with rule-based robustness for real-time, spectrum-based jailbreak monitoring in educational LLMs.
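
To make the pipeline concrete, the following is a minimal sketch of the feature-based approach described above, assuming a scikit-learn workflow. The feature matrix, labels, and hyperparameters here are placeholders (the abstract does not name the four linguistic variables or the authors' implementation); only two of the listed model families are shown.

```python
# Minimal sketch (not the authors' code): training feature-based jailbreak
# classifiers with scikit-learn. The feature values and labels below are
# synthetic stand-ins for the annotated prompt dataset described in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# One row per prompt: four hand-annotated linguistic feature scores
# (placeholders) and a binary jailbreak label.
X = rng.random((2300, 4))               # hypothetical feature matrix
y = (X.sum(axis=1) > 2.0).astype(int)   # synthetic labels, for illustration only

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Two of the model families named in the abstract; depth and iteration
# limits are illustrative, not the paper's settings.
for name, model in [
    ("decision_tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
    ("logistic_regression", LogisticRegression(max_iter=1000)),
]:
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```

In this setup, shallow trees and linear models keep the decision rules inspectable, which is the explainability advantage the abstract attributes to feature-based detectors over prompt-engineering approaches.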

@article{nguyen2025_2505.00010,
  title={Jailbreak Detection in Clinical Training LLMs Using Feature-Based Predictive Models},
  author={Tri Nguyen and Lohith Srikanth Pentapalli and Magnus Sieverding and Laurah Turner and Seth Overla and Weibing Zheng and Chris Zhou and David Furniss and Danielle Weber and Michael Gharib and Matt Kelleher and Michael Shukis and Cameron Pawlik and Kelly Cohen},
  journal={arXiv preprint arXiv:2505.00010},
  year={2025}
}