The next generation of AI tools are coming, do we need to better regulate them?

Authors

  • Michael Aubrey University of South Wales

DOI:

https://doi.org/10.34190/icair.4.1.3033

Keywords:

Artificial Intelligence, Cambridge Analytica, Disinformation, Governance, Regulation, Safety

Abstract

This research investigates the importance of robust regulation and legislation to properly govern the development of ‘Frontier Artificial Intelligence Systems’. To do this, a comparison of existing and proposed legislative approaches from the US, UK and EU has been undertaken. This involved reviewing summations, assessments and opinions from both academic writers and the media, and drawing comparisons with regulatory issues in other industries, such as social media. The key findings of this research highlight the different approaches being taken to the same problem. EU legislation that came into force on 1 August 2024 takes a safety-first approach, identifying risk levels ranging from ‘unacceptable risk’, which would be prohibited, down to ‘minimal risk’, which would remain unregulated. This is contrasted with the UK White Paper, which advocated an innovation-first approach with a secondary focus on safety. Given the potential risks associated with AI-enhanced cyber-attacks and the spread of disinformation across different platforms, this research emphasises the importance of strong safety regulation, and of ensuring it is built into the development of ‘Frontier AI’ from the outset rather than added as an afterthought.

Published

2024-12-04