A Comprehensive Artificial Intelligence Vulnerability Taxonomy
DOI: https://doi.org/10.34190/eccws.23.1.2157
Keywords: Artificial Intelligence, Vulnerability, Framework
Abstract
With the rise of artificial intelligence (AI) systems and machine learning (ML), there is a need for a comprehensive vulnerability framework that takes the specifics of AI systems into account. A review of the currently available frameworks shows that, even though there have been some efforts to create AI-specific frameworks, the results have been flawed. Previous works analysed for this paper include AVID, Mitre ATLAS, Google Secure AI Framework, Attacking Artificial Intelligence, the OWASP AI security and privacy guide, and the ENISA Multilayer framework for good cybersecurity practices in AI. While only AVID is intended to be an AI/ML-focused vulnerability framework, it has some weaknesses that are discussed further in the paper. Of the other works, the ENISA framework in particular offers a valuable way of determining the AI domains that can be affected by vulnerabilities. In our proposed taxonomy, the first part of the evaluation process is determining the location in the AI system lifecycle that the vulnerability affects. The second part is determining which attributes of technical AI trustworthiness are compromised by the vulnerability. The third part is determining the possible impact of the vulnerability being exploited, on a seven-step scale ranging from the AI system functioning correctly to it performing unintended, attacker-directed actions outside the bounds within which it is supposed to function. We also evaluate two known AI vulnerabilities against our proposed taxonomy to showcase its benefits in comparison to existing frameworks.
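To make the three-part evaluation concrete, the sketch below shows one possible way a classification under the proposed taxonomy could be recorded as a data structure. This is an illustrative assumption, not the paper's actual schema: the lifecycle stage names, the trustworthiness attribute names, and the intermediate labels of the seven-step impact scale are placeholders; only the two endpoints of the impact scale follow the abstract.

from dataclasses import dataclass
from enum import Enum, IntEnum


class LifecycleStage(Enum):
    """Illustrative AI lifecycle stages (placeholder names, not the paper's exact list)."""
    DATA_COLLECTION = "data collection"
    TRAINING = "training"
    EVALUATION = "evaluation"
    DEPLOYMENT = "deployment"
    OPERATION = "operation"


class TrustworthinessAttribute(Enum):
    """Illustrative attributes of technical AI trustworthiness (placeholder names)."""
    ACCURACY = "accuracy"
    ROBUSTNESS = "robustness"
    PRIVACY = "privacy"
    TRANSPARENCY = "transparency"


class ImpactLevel(IntEnum):
    """Seven-step impact scale; only the endpoints are described in the abstract,
    the intermediate labels here are placeholders."""
    CORRECT_FUNCTION = 1              # AI system functions correctly
    MINOR_DEGRADATION = 2
    RELIABILITY_LOSS = 3
    INFORMATION_LEAKAGE = 4
    ATTACKER_INFLUENCED_OUTPUT = 5
    ATTACKER_CONTROLLED_OUTPUT = 6
    OUT_OF_BOUNDS_ACTIONS = 7         # unintended, attacker-directed actions outside intended bounds


@dataclass
class VulnerabilityRecord:
    """One taxonomy entry: where the vulnerability occurs in the lifecycle,
    which trustworthiness attributes it compromises, and its possible impact."""
    identifier: str
    lifecycle_stage: LifecycleStage
    compromised_attributes: list[TrustworthinessAttribute]
    impact: ImpactLevel


# Hypothetical example: a data-poisoning vulnerability classified with the three-part scheme.
record = VulnerabilityRecord(
    identifier="EXAMPLE-0001",
    lifecycle_stage=LifecycleStage.TRAINING,
    compromised_attributes=[TrustworthinessAttribute.ROBUSTNESS, TrustworthinessAttribute.ACCURACY],
    impact=ImpactLevel.ATTACKER_INFLUENCED_OUTPUT,
)
print(record)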
License
Copyright (c) 2024 European Conference on Cyber Warfare and Security
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.