PHANTOMATRIX: Explainability for Detecting Gender Bias in Affective Computing
DOI:
https://doi.org/10.34190/icgr.8.1.3199

Abstract
The PHANTOMATRIX project is a research incubator at the International University of Applied Sciences that aims to advance the field of Human-Machine Interaction by integrating machine learning (ML) techniques to predict emotional states from physiological and facial expression data within Virtual Reality environments. A major focus of the project is on building trustworthy ML models by using explainable AI (XAI) methods that rank features according to their predictive power, which aids in understanding the most influential factors in emotional state predictions. In addition, a comparative analysis of XAI techniques applied to emotion prediction models allows us to assess and correct for the effect of gender on predictive performance. As affective computing is a highly sensitive research arena, it is of the utmost importance to ensure bias-free models. Key XAI methods such as Deep Taylor Decomposition (DTD) and SHapley Additive exPlanations (SHAP) are employed to clarify the contributions of features towards model predictions, providing insights into how specific signals influence emotion detection across individuals. This allows for a comprehensive comparison of different XAI approaches and their utility in gender bias detection and mitigation. To further our understanding of gender dynamics within emotional predictions, we develop intuitive visualizations that graphically represent the link between multimodal input data and the resulting emotional predictions, supporting the interpretation of complex model outputs and making them accessible not only to researchers but also to novice users of the system. Our background research demonstrates the effectiveness of XAI methods in identifying and mitigating gender bias in emotion prediction models. By applying XAI, the project reduces the influence of gender-based disparities in affective computing, leading to more equitable model performance across demographics. This research not only highlights the importance of transparent, bias-free AI affect models but also lays a foundation for future developments in responsible affective computing. The findings contribute to advancing trust in AI-driven emotion analysis, promoting fairer and more inclusive applications of this highly relevant technology.
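The abstract itself contains no code; the following is a minimal, self-contained sketch (assuming Python with scikit-learn and the shap package) of the kind of SHAP-based group comparison described above: feature attributions are computed for an emotion classifier and their magnitudes contrasted between gender groups. The synthetic data, feature names, and model choice are illustrative assumptions, not the project's actual VR pipeline, and the DTD analysis is not shown.

```python
# Illustrative sketch only: compare SHAP attributions between gender groups
# for an emotion classifier. Data, feature names, and model are synthetic
# placeholders, not the PHANTOMATRIX pipeline.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical multimodal features (e.g., heart rate, electrodermal peaks,
# facial action-unit intensities) and a binary emotion label.
feature_names = ["hr_mean", "eda_peaks", "au12_smile", "au04_brow", "resp_rate", "skin_temp"]
n, d = 600, len(feature_names)
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)
gender = rng.integers(0, 2, size=n)  # synthetic group label, 0 / 1

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)


def predict_pos(data):
    """Probability of the positive emotion class, explained below."""
    return model.predict_proba(data)[:, 1]


# Kernel SHAP with a small background sample to keep computation tractable.
explainer = shap.KernelExplainer(predict_pos, shap.sample(X, 50))

X_eval, g_eval = X[:150], gender[:150]
sv = explainer.shap_values(X_eval, nsamples=100)  # shape: (n_eval, n_features)

# Compare mean absolute attributions per feature between the two groups;
# large gaps flag features whose influence on the prediction differs by
# gender and therefore warrants closer inspection for bias.
for g in (0, 1):
    mean_attr = np.abs(sv[g_eval == g]).mean(axis=0)
    print(f"group {g}:", dict(zip(feature_names, np.round(mean_attr, 3))))
```

In this sketch, a pronounced difference in mean attribution for a given feature between the two groups would be the signal that prompts the kind of bias inspection and mitigation the project describes.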
License
Copyright (c) 2025 International Conference on Gender Research

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.