Position: A taxonomy for reporting and describing AI security incidents

19 December 2024
Lukas Bieringer
Kevin Paeth
Jochen Stängler
Andreas Wespi
Alexandre Alahi
Kathrin Grosse
Abstract

As AI usage becomes more ubiquitous, AI incident reporting is both increasingly practiced in industry and mandated by regulatory requirements. At the same time, it is well established that AI systems are exploited in practice by a growing number of security threats. Yet organizations and practitioners lack the necessary guidance to describe AI security incidents. In this position paper, we argue that specific taxonomies are required to describe and report security incidents of AI systems. In other words, existing frameworks for either non-AI security or generic AI safety incident reporting are insufficient to capture the specific properties of AI security. To support our position, we offer an AI security incident taxonomy and highlight relevant properties, such as machine readability and integration with existing frameworks. We have derived this proposal from interviews with experts, aiming for standardized reporting of AI security incidents that meets the requirements of the affected stakeholder groups. We hope that this taxonomy sparks discussion and eventually enables the sharing of AI security incidents across organizations, leading to more secure AI.
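The taxonomy itself is defined in the paper; as a rough illustration of what a machine-readable AI security incident record could look like, the Python sketch below uses hypothetical field names (incident_id, attack_type, affected_lifecycle_stage, and so on) chosen for this example. They are assumptions for illustration, not the categories proposed by the authors.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class AISecurityIncident:
    """Hypothetical machine-readable incident record.

    Field names are illustrative assumptions only; they do not
    reproduce the taxonomy defined in the paper.
    """
    incident_id: str
    reported_at: str               # ISO 8601 timestamp
    attack_type: str               # e.g. "evasion", "poisoning", "model extraction"
    affected_lifecycle_stage: str  # e.g. "training", "deployment"
    impact: str                    # free text or a controlled vocabulary
    related_frameworks: list[str]  # e.g. pointers into existing frameworks such as MITRE ATLAS


# Serializing the record to JSON keeps the report machine readable and
# easy to exchange across organizations.
incident = AISecurityIncident(
    incident_id="AISEC-0001",
    reported_at="2024-12-19T00:00:00Z",
    attack_type="model extraction",
    affected_lifecycle_stage="deployment",
    impact="confidentiality of proprietary model weights",
    related_frameworks=["MITRE ATLAS"],
)
print(json.dumps(asdict(incident), indent=2))
```

A structured record like this is one way to realize the machine readability and framework integration the abstract highlights: each field can carry either free text or a value from an agreed vocabulary, so reports remain comparable across organizations.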

View on arXiv
@article{bieringer2025_2412.14855,
  title={Position: A taxonomy for reporting and describing AI security incidents},
  author={Lukas Bieringer and Kevin Paeth and Jochen Stängler and Andreas Wespi and Alexandre Alahi and Kathrin Grosse},
  journal={arXiv preprint arXiv:2412.14855},
  year={2025}
}