Design Lessons from Building Deep Learning Disinformation Generation and Detection Solutions


  • Clara Maathuis Open University
  • Iddo Kerkhof Independent Researcher
  • Rik Godschalk Independent Researcher
  • Harrie Passier Open University



social media manipulation, disinformation, misinformation, security awareness, machine learning, deep learning


In essence, social media is becoming the superposition of all digital representations of human concepts, ideas, beliefs, attitudes, and experiences. In this realm, information is not only shared but also misinterpreted or deliberately distorted, whether unintentionally or intentionally, guided by (some degree of) awareness, uncertainty, or offensive intent. This can have implications and consequences such as societal and political polarization, and can influence or alter human behaviour and beliefs. To tackle the issues posed by social media manipulation mechanisms such as disinformation and misinformation, a diverse palette of efforts has been proposed, ranging from governmental and social media platform strategies, policies, and methods to academic and independent studies and solutions. From a technical standpoint, however, such solutions rely mainly on gaming or AI-based techniques and technologies, often consider only the defender's perspective, and address the social dimension of this phenomenon in a limited way, leaving them single-angled. To address these issues, this research combines the defenders' perspective with that of the offenders by (i) building a hybrid deep learning disinformation generation and detection model and (ii) capturing and proposing a set of design recommendations that could be considered when establishing patterns, requirements, and features for building future gaming and AI-based solutions for combating social media manipulation mechanisms. This is done using the Design Science Research methodology in a Data Science approach, aiming to enhance security awareness and resilience against social media manipulation.