From Hoax to Reality: Deepfake-Driven Misinformation and the Death of Ozzy Osbourne
DOI: https://doi.org/10.34190/iccws.21.1.4530

Keywords: deepfakes, misinformation, fake news, liar's dividend, digital trust, FOMO

Abstract
Deepfakes are AI-generated images, videos, and audio that convincingly mimic real individuals. In recent years, such forgeries have proliferated across social media, and their role in fraud cases has increased markedly. Detection tools, however, often fail in real-world conditions: open-source detectors typically perform only half as well on "in-the-wild" content as on curated test sets. This performance gap heightens the risk that fabricated content will undermine public trust and foster a climate of suspicion in which even authentic recordings are questioned, a phenomenon known as the "liar's dividend." In this case study, we examine how the July 2025 death of rock icon Ozzy Osbourne became a focal point for deepfake-driven misinformation and public speculation. Following a seated farewell concert in Birmingham, multiple synthetic videos surfaced, including one in which a digitally recreated Osbourne claimed he knew he was about to die. The clips sparked speculation about assisted suicide, prompting his daughter Kelly to publicly denounce the videos as fake and criticise those who shared them. When Osbourne died two weeks later, some commentators treated the deepfake as prophetic, fuelling conspiracy theories and amplifying public grief. This case illustrates broader ethical and governance challenges raised by generative AI. Voice-cloning and face-swapping services can create convincing media from minimal training data, yet developers rarely address consent or privacy when sourcing that material. Psychological factors such as fear of missing out (FOMO) encourage the viral spread of sensational content without verification. This paper's primary contribution is a theoretical synthesis that integrates existing technical, psychological, and governance perspectives on deepfake-driven misinformation through a single illustrative case study.
Effective countermeasures must combine technical innovations, such as blockchain-based provenance tracking and robust detection, with clear policy frameworks that regulate data use and require transparent labelling of synthetic media. Public education remains essential to help individuals recognise deepfakes and preserve trust in authentic digital communication.
License
Copyright © 2026 Alexander Pfeiffer, Nanditha Krishna, Thomas Wernbacher, Walter Seböck

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.