Perceived Factors of Trustworthiness in Generative Artificial Intelligence (GenAI): Towards an Understanding of how to Assess and Build Trustworthiness

Authors

DOI:

https://doi.org/10.34190/eckm.26.1.3958

Keywords:

Trust, Generative Artificial Intelligence (GenAI), Trustworthiness, Model of Trust, Organizational

Abstract

Given the benefits and risks associated with GenAI adoption in organizations, many academics and practitioners have stressed the importance of understanding how humans come to trust these technologies and the information and knowledge (e.g., solutions/decisions) they produce. The objective of this paper is to further examine human trust in AI technologies through the lens of the widely accepted organizational trust theory and model developed by Mayer, Davis, and Schoorman. More specifically, this paper focuses on developing a better understanding of the perceived factors of GenAI trustworthiness, since assessing trustworthiness is a critical determinant of trust. Building on the existing theory and model, it is proposed that an individual's perception of one or more of the following dimensions of trustworthiness - ability, integrity, and benevolence - will determine how trustworthy they find GenAI to be. Ability (or competence) refers to the trustee's skills, knowledge, and expertise in a particular domain. Integrity reflects the trustee's sound values or principles (e.g., fairness, consistency, justice). Benevolence is an altruistic loyalty that reflects the trustee's concern for the welfare, needs, desires, and interests of the individual over organizational or profit motives. Many researchers have proposed assessments related to GenAI ability, but integrity and benevolence are more difficult to assess, as technologies do not intrinsically embody human values or altruistic behaviors. Consequently, perceptions of these dimensions may be conflated with perceptions of other parties within the organization, such as AI designers and developers, strategic decision-makers, or the organization itself.
The paper continues by briefly discussing how emotions and organizational culture may influence individuals' perceptions of trustworthiness and concludes by suggesting potential directions and strategies for building and representing each dimension of perceived trustworthiness in the context of GenAI.

Author Biographies

Max Evans, McGill University

Max Evans is an Associate Professor and Graduate Program Director in the School of Information Studies at McGill University. His research area is information and knowledge management, with a focus on affective, cognitive, social, and technological factors influencing organizational information and knowledge sharing (principally concentrating on interpersonal and organizational trust).

Anthony K.P. Wensley, University of Toronto

Anthony Wensley is an Emeritus Professor at the University of Toronto; prior to retirement, he was the Founding Director of the Institute of Communication, Culture, Information and Technology. His research focuses on the design and implementation of digital technologies in the domains of enterprise computing, knowledge management, and intellectual capital.

Godwin B. Akrong, McGill University

Godwin Akrong is a PhD Candidate in the School of Information Studies at McGill University. His research area is technology, trust, and social impact, with a focus on understanding how people perceive and trust generative AI systems such as ChatGPT and Copilot, particularly in higher education settings.

Published

2025-08-29