Explainable AI in Insider Financial Fraud Detection Models: A Review of Transparency and Trust

DOI:

https://doi.org/10.34190/iccws.21.1.4416

Keywords:

explainable AI, transparent model, model interpretability, financial fraud detection, insider fraud detection

Abstract

Financial and insider fraud increasingly intersect with broader cybercrime ecosystems, creating attack vectors that undermine national cyber resilience and the integrity of digital financial infrastructures. As organizations turn to machine learning (ML) and deep learning (DL) models for automated fraud and insider-threat detection, the opacity of these systems presents strategic risks for cyber defense: unexplainable alerts weaken analyst trust, complicate incident response, and challenge regulatory and forensic accountability. This study presents a systematic review of 107 empirically validated works (2015–2025) examining how Explainable Artificial Intelligence (XAI) techniques enhance transparency, trustworthiness, and operational readiness in AI-driven fraud detection systems. Using a mixed bibliometric–thematic methodology, the review maps the evolution of ML/DL architectures, XAI adoption patterns, evaluation practices, and dataset limitations within security-critical environments. The findings highlight a sector-wide dependence on post-hoc feature attribution and reveal emerging shifts toward intrinsic interpretability through attention mechanisms and hybrid temporal models. Despite this progress, gaps persist: limited use of sequential behavioral models, narrow evaluation metrics, and overreliance on structured datasets weaken real-world resilience against adaptive adversaries. To address these challenges, the paper proposes a Three-Pillar Framework (Algorithmic Transparency, Evaluation Accountability, and Data Traceability) that positions explainability as a foundational architectural property of cyber defense systems. By aligning model interpretability with security operations, regulatory requirements, and analyst cognition, the framework strengthens organizational readiness against insider threats, financial fraud, and AI-targeted adversarial manipulation, all key considerations in modern cyber warfare and security operations.

Author Biographies

William Leslie Brown-Acquaye , Ghana Communication Technology University

Dr. William Leslie Brown-Acquaye holds a PhD in Automation and Control of Industrial Technological Processes from Tambov State Technical University and an MSc in Information Systems and Technology from Tver State Technical University, Russia. He is a Senior Lecturer and the Dean of the Faculty of Computing and Information Systems at the Ghana Communication Technology University.

Forgor Lempogo, Ghana Communication Technology University

Dr. Forgor Lempogo holds an MSc in Information Systems and Technology and a PhD in Automation and Control from the Tver State Technical University, Russian Federation. He is a Senior Lecturer and Vice-Dean of the Faculty of Computing and Information Systems (FOCIS) at the Ghana Communication Technology University, Accra, Ghana.

Kwame Bell-Dzide, Ghana Communication Technology University

Kwame Bell-Dzide holds an MSc in Management Information Systems from the Ghana Communication Technology University. He is currently pursuing a PhD in Computer Science at the same institution, with a focus on Information Security. His research explores privacy-preserving frameworks for wearable and IoT devices, emphasizing lightweight encryption and multi-modal biometric authentication.

Israel Edem Agbehadji, Ghana Communication Technology University; Durban University of Technology (DUT)

Dr. Israel Edem Agbehadji is an academic and researcher with over a decade of experience, specializing in bio-inspired algorithms, artificial intelligence, and their applications in environmental monitoring. His research includes developing algorithms for integrating explainable artificial intelligence into environmental monitoring models and creating smart waste management solutions.

Published

19-02-2026