A Taxonomy of Omnicidal Futures Involving Artificial Intelligence
Andrew Critch
Jacob Tsimerman
Main: 9 pages
Figures: 1
Bibliography: 1 page
Abstract
This report presents a taxonomy and examples of potential omnicidal events resulting from AI: scenarios where all or almost all humans are killed. These events are not presented as inevitable, but as possibilities that we can work to avoid. Insofar as large institutions require a degree of public support in order to take certain actions, we hope that by presenting these possibilities in public, we can help to support preventive measures against catastrophic risks from AI.