What Culture is ChatGPT’s AI?
DOI: https://doi.org/10.34190/eccws.23.1.2364

Keywords: Artificial intelligence, military planning, ChatGPT, cultural parameters, societal implications of technology

Abstract
Artificial intelligence (AI) is increasingly used in many fields. It is widely perceived as an intelligent system that does not merely follow algorithms but can demonstrate independent judgment, and it is especially valued for handling complex tasks. Responses from the most popular AI chat interface, Chat Generative Pre-Trained Transformer (ChatGPT), are used to guide decision-making processes and can provide informative answers or recommendations for a wide variety of scenarios, such as screening job applicants or planning military strategy. However, just as human intelligence is shaped by cultural biases that affect thought processes and interactions, AI's outputs may also be influenced by inherent cultural biases, whether programmed or incidental, potentially leading to inappropriate outcomes. Given that AI is often used to assist or replace human decision-making, it is particularly important to examine its potential cultural biases. This study aims to assess the cultural bias of ChatGPT by comparing its responses with established cultural indices, employing the cultural parameters defined by House et al. (2004) and Hofstede (2001). The methodology involves selecting specific cultural parameters, formulating a set of questions representative of these parameters, and analyzing ChatGPT's responses. Using appropriate statistical methods, the study compares ChatGPT's manifested culture with the known values of existing cultures as defined by the GLOBE and Hofstede parameters.
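To illustrate the kind of comparison the abstract describes, the following is a minimal sketch, not the authors' actual procedure: it assumes a set of dimension scores already extracted from ChatGPT's questionnaire answers (chatgpt_scores) and a few country profiles on the six Hofstede dimensions, then correlates the model's profile with each country's. All numeric values are illustrative placeholders, not the published Hofstede or GLOBE index values.

```python
# Hypothetical sketch: which national cultural profile do ChatGPT-derived
# dimension scores most resemble? Numbers are illustrative placeholders only.
from scipy.stats import pearsonr

DIMENSIONS = [
    "power_distance",
    "individualism",
    "masculinity",
    "uncertainty_avoidance",
    "long_term_orientation",
    "indulgence",
]

# Illustrative country profiles on a 0-100 scale (NOT actual index values).
country_profiles = {
    "Country A": [40, 91, 62, 46, 26, 68],
    "Country B": [68, 71, 43, 86, 63, 48],
    "Country C": [80, 20, 66, 30, 87, 24],
}

# Placeholder for scores derived from ChatGPT's answers to the
# parameter-specific questions described in the abstract.
chatgpt_scores = [45, 85, 55, 50, 40, 60]

# Correlate the model's profile with each country profile.
results = {
    country: pearsonr(chatgpt_scores, profile)
    for country, profile in country_profiles.items()
}

# Report countries in order of decreasing similarity.
for country, (r, p) in sorted(results.items(), key=lambda kv: -kv[1][0]):
    print(f"{country}: r={r:.2f}, p={p:.3f}")
```

Pearson correlation is only one plausible choice here; distance-based measures or rank correlations could equally serve as the "appropriate statistical methods" the abstract leaves unspecified.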
License
Copyright (c) 2024 European Conference on Cyber Warfare and Security
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.