What Culture is ChatGPT’s AI?

Keywords: artificial intelligence, military planning, ChatGPT, cultural parameters, societal implications of technology


Artificial intelligence (AI) is increasingly used across many fields. It is widely perceived as an intelligent system that does not merely follow algorithms but can exercise independent judgment, and it is especially valued for handling complex tasks. Responses from the most popular AI chat interface, the Chat Generative Pre-Trained Transformer (ChatGPT), are used to guide decision-making and can provide informative answers or recommendations for a wide variety of scenarios, such as screening job applicants or planning military strategy. However, just as human intelligence is shaped by cultural biases that affect thought processes and interactions, AI outputs may also be influenced by inherent cultural biases, whether programmed or incidental, potentially leading to inappropriate outcomes. Because AI is often used to assist or replace human decision-making, examining its potential cultural biases is particularly important. This study assesses the cultural bias of ChatGPT by comparing its responses with established cultural indices, employing the cultural parameters defined by House et al. (2004) and Hofstede (2001). The methodology involves selecting specific cultural parameters, formulating a set of questions representative of those parameters, and analyzing ChatGPT's responses. Using appropriate statistical methods, the study compares ChatGPT's manifested culture with the known values of existing cultures as defined by the GLOBE and Hofstede parameters.
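The comparison described above can be sketched in code. The following is a minimal illustration, not the study's actual method: it assumes hypothetical index values on a 0-100 scale (the dimension labels PDI, IDV, and UAI echo Hofstede's power distance, individualism, and uncertainty avoidance, but the numbers and country names are invented) and finds the culture whose profile lies closest, by Euclidean distance, to a profile scored from ChatGPT's answers.

```python
from math import sqrt

# Hypothetical Hofstede-style index values (0-100) for illustration only;
# real values would come from Hofstede (2001) or the GLOBE study.
COUNTRY_INDICES = {
    "CountryA": {"PDI": 40, "IDV": 91, "UAI": 46},
    "CountryB": {"PDI": 68, "IDV": 71, "UAI": 86},
    "CountryC": {"PDI": 54, "IDV": 46, "UAI": 92},
}

def euclidean_distance(a: dict, b: dict) -> float:
    """Distance between two cultural-dimension profiles over shared keys."""
    keys = a.keys() & b.keys()
    return sqrt(sum((a[k] - b[k]) ** 2 for k in keys))

def nearest_culture(chatgpt_profile: dict, indices: dict) -> tuple[str, float]:
    """Return (country, distance) for the index profile closest to ChatGPT's."""
    return min(
        ((name, euclidean_distance(chatgpt_profile, prof))
         for name, prof in indices.items()),
        key=lambda pair: pair[1],
    )

# Hypothetical profile derived by scoring ChatGPT's answers on the same scales.
chatgpt_scores = {"PDI": 45, "IDV": 85, "UAI": 50}
name, dist = nearest_culture(chatgpt_scores, COUNTRY_INDICES)
print(name, round(dist, 1))  # → CountryA 8.8
```

A fuller analysis would replace the single nearest-neighbour distance with the statistical tests the abstract alludes to (e.g. correlations across dimensions), but the distance comparison conveys the basic idea of matching a manifested profile against known cultural indices.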

Author Biographies

Juhani Rauhala, University of Jyväskylä

Juhani Rauhala (Eur Ing, PhD) is a Research Affiliate in the Faculty of Information Technology at the University of Jyväskylä. He has over ten years' experience in the telecommunications industry and holds two patents. His research interests include information privacy, cybersecurity, unorthodox weaponization, and technology abuse.

Tong Xin, Queen Mary University of London, London, United Kingdom

Tong Xin is a Research Associate in the School of Electronic Engineering and Computer Science at Queen Mary University of London. Her research interests include information security behaviors, cyber threat intelligence, and information security investment decision-making.