Unpacking AI Security Considerations

Keywords: Artificial intelligence, attack, cyber threat

Artificial Intelligence (AI) has emerged as a compelling tool across a myriad of applications in sectors such as finance, traffic prediction, health, and travel. Owing to its enormous benefits in terms of automation, convenience, processing time, reduced man-hours, and productivity, AI is being seen as the next technological revolution. It is showcased both as a means to stimulate creativity and as a source of tremendous computational support. The release of tools like ChatGPT has exploded onto the technological scene, with users employing Large Language Models (LLMs) for a host of activities such as writing essays, translating documents, and drawing up travel plans. However, the popularity of these tools has not been without risk. In the technology marketplace, the race for dominance can pressure competitors to set aside safety concerns in favour of product adoption, and many users are unaware of the dangers and risks that may inherently reside within AI tools. AI security has therefore become a paramount concern that should not be ignored. This paper examines the potential risks and threat vectors of Artificial Intelligence, such as the creation of misinformation or scams. It aims to provide insight into the malicious use of AI tools through a discussion of techniques used to bypass security controls, giving a more detailed account of how AI can be manipulated in order to empower users against the latest attack schemes.

Author Biography

Namosha Veerasamy, CSIR

Namosha Veerasamy is a computer scientist who has worked in the field of cyber security research and governance for over 15 years. She received her PhD from the University of Johannesburg and is currently employed as a senior researcher at the Council for Scientific and Industrial Research (CSIR) in Pretoria. She also holds CISSP and CISM certifications.