“Smart” Psychological Operations in Social Media: Security Challenges in China and Germany


  • Darya Bazarkina Institute of Europe of the Russian Academy of Sciences
  • Darya Matyashova School of International Relations, Saint Petersburg State University




Keywords: malicious use of artificial intelligence, psychological operations, social media, Germany, China


Artificial intelligence (AI) is being actively incorporated into the communication process as it rapidly spreads and becomes cheaper for companies and other actors to use. AI has long been used to run social media: it powers the platforms’ algorithms, bots, and deepfake technology, and serves as an instrument for content monitoring and targeting. However, a variety of actors now increasingly use AI technology with malicious intent. For example, terrorist organizations use bots on social networks to spread their propaganda and recruit new fighters. Crime involving AI is growing at a rapid pace, and its impact is extremely negative: mass protests demanding restrictions on the use of technology, the involvement of manipulated persons in criminal groups, and the destruction of the reputations of victims of “smart” slander (sometimes accompanied by threats to their life and health). Combating these phenomena is a task not only for security agencies but also for civil society institutions, the academic community, legislators, politicians, and the business community, since the complex nature of the threat requires complex solutions involving all interested parties. This paper aims to answer the following research questions: 1) What are the current threats to the psychological security of society caused by the malicious use of AI on social networks? 2) How do malicious (primarily non-state) actors carry out psychological operations through AI on social networks? 3) What impacts (behavioral, political, etc.) do such operations have on society? 4) How can the psychological security of society be protected using both existing and innovative approaches? The answer to this last question is inextricably linked to the possibilities offered by international cooperation.
This paper examines the experiences of Germany and China, two leaders in the field of AI with different socio-political systems and approaches to a number of international issues. The paper concludes that increased international cooperation can make countering psychological operations conducted through AI more effective and thereby protect society’s interests.

Author Biography

Darya Matyashova, School of International Relations, Saint Petersburg State University

Darya Matyashova is a master’s student at the School of International Relations, Saint Petersburg State University, and the author of more than 20 publications on the communication aspects of AI and on international conflicts in the Asia-Pacific. Her research interests include cybersecurity, normative and soft power, and the politics of contemporary rising powers.