Do Large Language Models Align with Core Mental Health Counseling Competencies?

The rapid evolution of Large Language Models (LLMs) presents a promising response to the global shortage of mental health professionals. However, their alignment with essential counseling competencies remains underexplored. We introduce CounselingBench, a novel NCMHCE-based benchmark evaluating 22 general-purpose and medically fine-tuned LLMs across five key competencies. While frontier models surpass minimum aptitude thresholds, they fall short of expert-level performance, excelling in Intake, Assessment & Diagnosis but struggling with Core Counseling Attributes and Professional Practice & Ethics. Surprisingly, medical LLMs do not outperform generalist models in accuracy, though they provide slightly better justifications while making more context-related errors. These findings highlight the challenges of developing AI for mental health counseling, particularly in competencies requiring empathy and nuanced reasoning. Our results underscore the need for specialized, fine-tuned models aligned with core mental health counseling competencies and supported by human oversight before real-world deployment. Code and data associated with this manuscript can be found at: this https URL
@article{nguyen2025_2410.22446,
  title={Do Large Language Models Align with Core Mental Health Counseling Competencies?},
  author={Viet Cuong Nguyen and Mohammad Taher and Dongwan Hong and Vinicius Konkolics Possobom and Vibha Thirunellayi Gopalakrishnan and Ekta Raj and Zihang Li and Heather J. Soled and Michael L. Birnbaum and Srijan Kumar and Munmun De Choudhury},
  journal={arXiv preprint arXiv:2410.22446},
  year={2025}
}