
An Approach to Technical AGI Safety and Security

Abstract

Artificial General Intelligence (AGI) promises transformative benefits but also presents significant risks. We develop an approach to address the risk of severe harms: those consequential enough to significantly damage humanity. We identify four areas of risk: misuse, misalignment, mistakes, and structural risks. Of these, we focus on technical approaches to misuse and misalignment. For misuse, our strategy aims to prevent threat actors from accessing dangerous capabilities by proactively identifying those capabilities and implementing robust security, access restrictions, monitoring, and model safety mitigations. To address misalignment, we outline two lines of defense. First, model-level mitigations such as amplified oversight and robust training can help to build an aligned model. Second, system-level security measures such as monitoring and access control can mitigate harm even if the model is misaligned. Techniques from interpretability, uncertainty estimation, and safer design patterns can enhance the effectiveness of these mitigations. Finally, we briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
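
The second line of defense described above (system-level measures that constrain a possibly misaligned model) can be pictured as a wrapper that enforces access control before a model call and monitoring after it. The sketch below is illustrative only and is not taken from the paper; all names (`AccessPolicy`, `OutputMonitor`, `guarded_generate`) and the keyword-based monitor are hypothetical simplifications of the mechanisms the abstract names.

```python
# Minimal sketch, assuming a callable "model" (prompt -> text). A real system
# would use trained classifiers or a second model as the monitor; a keyword
# deny-list keeps the example self-contained.
from dataclasses import dataclass, field
from typing import Callable, Dict, Set


@dataclass
class AccessPolicy:
    """Restrict which users may invoke which capability tiers."""
    allowed_capabilities: Dict[str, Set[str]]  # user_id -> permitted capability tags

    def permits(self, user_id: str, capability: str) -> bool:
        return capability in self.allowed_capabilities.get(user_id, set())


@dataclass
class OutputMonitor:
    """Flag outputs that match simple deny-list patterns."""
    deny_terms: Set[str] = field(default_factory=lambda: {"synthesis route", "exploit payload"})

    def flags(self, text: str) -> bool:
        lowered = text.lower()
        return any(term in lowered for term in self.deny_terms)


def guarded_generate(
    model: Callable[[str], str],
    user_id: str,
    capability: str,
    prompt: str,
    policy: AccessPolicy,
    monitor: OutputMonitor,
) -> str:
    """Apply access control before the model call and monitoring after it."""
    if not policy.permits(user_id, capability):
        return "[request refused: capability not permitted for this user]"
    output = model(prompt)
    if monitor.flags(output):
        return "[output withheld: flagged by monitor for human review]"
    return output


if __name__ == "__main__":
    policy = AccessPolicy(allowed_capabilities={"alice": {"general_qa"}})
    monitor = OutputMonitor()
    echo_model = lambda prompt: f"Echoing: {prompt}"  # stand-in for a real model
    print(guarded_generate(echo_model, "alice", "general_qa", "hello", policy, monitor))
    print(guarded_generate(echo_model, "bob", "general_qa", "hello", policy, monitor))
```

The point of the sketch is only that such measures sit outside the model itself, so they can reduce harm even when model-level alignment mitigations fail.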

@article{shah2025_2504.01849,
  title={An Approach to Technical AGI Safety and Security},
  author={Rohin Shah and Alex Irpan and Alexander Matt Turner and Anna Wang and Arthur Conmy and David Lindner and Jonah Brown-Cohen and Lewis Ho and Neel Nanda and Raluca Ada Popa and Rishub Jain and Rory Greig and Samuel Albanie and Scott Emmons and Sebastian Farquhar and Sébastien Krier and Senthooran Rajamanoharan and Sophie Bridgers and Tobi Ijitoye and Tom Everitt and Victoria Krakovna and Vikrant Varma and Vladimir Mikulik and Zachary Kenton and Dave Orr and Shane Legg and Noah Goodman and Allan Dafoe and Four Flynn and Anca Dragan},
  journal={arXiv preprint arXiv:2504.01849},
  year={2025}
}