“Should everyone have access to AI?” Perspectives on Ownership of AI Tools for Security

Authors

  • Yasmine Ezzeddine, Sheffield Hallam University
  • Petra Saskia Bayerl, Sheffield Hallam University

DOI:

https://doi.org/10.34190/icair.4.1.3029

Keywords:

Artificial Intelligence, Ownership, Citizens, Law Enforcement Agencies, Police

Abstract

Given the widespread concerns about the integration of Artificial Intelligence (AI) tools into security and law enforcement, it is natural for digital governance to strive for greater inclusivity in both practice and design (Chohan and Hu, 2020). This inclusivity can manifest in several ways, such as advocating for legal frameworks and algorithmic governance (Schuilenburg and Peeters, 2020), giving individuals choice, and addressing unintended consequences in extensive data management (Peeters and Widlak, 2018). An under-reflected aspect is the question of ownership, i.e., who should be able to possess and deploy AI tools for law enforcement purposes. Our interview findings from 111 participants across seven countries identified five citizen viewpoints on the ownership of security-related AI: (1) Police and police-governed agencies; (2) Citizens who disassociate themselves; (3) Entities other than the police; (4) All citizens, including themselves; and (5) No one or unsure. The five clusters represent disparate perspectives on who should be responsible for AI technologies, as well as related concerns about data ownership and expertise, and thus link into broader discussions on responsibility for security, i.e., what deserves protection, how, and by whom. The findings contribute theoretically to digitalization, smart technology, social inclusion, and security studies. Additionally, the study seeks to influence policy by advocating for AI development that addresses citizen concerns, thereby mitigating the risks and the social and ethical implications associated with AI. Crucially, it aims to highlight citizens’ concerns about the potential for malicious actors to exploit ownership of such powerful technology for harmful purposes.

Author Biographies

Yasmine Ezzeddine, Sheffield Hallam University

Yasmine Ezzeddine is a Lecturer in Policing at Staffordshire University and Associate Lecturer at Sheffield Hallam University. A PhD candidate, she researches resistance to surveillance and AI in law enforcement. Her interests include ethical AI, criminal intelligence, and forensic sciences.

Petra Saskia Bayerl, Sheffield Hallam University

Petra Saskia Bayerl is a Professor of Digital Communication and Security at Sheffield Hallam University. Her research focuses on human-computer interaction, organizational communication, and privacy. She holds multiple master’s degrees and a PhD in Industrial Design Engineering from TU Delft.

Published

2024-12-04