Organisations are rapidly adopting artificial intelligence (AI) tools to perform tasks previously undertaken by people. The potential benefits are enormous. Separately, some organisations deploy personnel security measures to mitigate the security risks arising from trusted human insiders. Unfortunately, there is no meaningful interplay between the rapidly evolving domain of AI and the traditional world of personnel security. This is a problem. The complex risks from human insiders are hard enough to understand and manage, despite many decades of effort. The emerging security risks from AI insiders are even more opaque. Both sides need all the help they can get. Some of the concepts and approaches that have proved useful in dealing with human insiders are also applicable to the emerging risks from AI insiders.
@article{martin2025_2504.00012,
  title={I'm Sorry Dave: How the old world of personnel security can inform the new world of AI insider risk},
  author={Paul Martin and Sarah Mercer},
  journal={arXiv preprint arXiv:2504.00012},
  year={2025}
}