Teaching Bayesian Reasoning as a Pathway Toward Active Thinking and Explainable AI

Authors

  • Dimitrios Lappas, University of the Aegean
  • Panagiotis Karampelas, Hellenic Air Force Academy
  • Giorgos Fesakis

DOI:

https://doi.org/10.34190/icair.5.1.4325

Keywords:

Bayesian Networks, Explainable AI, Probabilistic Reasoning

Abstract

Decision-making under uncertainty requires not only computational tools but also critical thinking skills that allow individuals to evaluate assumptions, weigh evidence, and mitigate automation bias. While many contemporary AI systems operate as opaque black-box models, Bayesian Networks (BNs) provide a transparent and explainable alternative, making them well-suited for both decision support and AI education. This paper introduces an educational framework where learners construct, parameterize, and interpret Bayesian models to address authentic problems, such as classifying suspicious emails in cybersecurity. By explicitly modelling variables, dependencies, and prior assumptions, BNs engage students in probabilistic reasoning while promoting metacognitive reflection and critical evaluation of their decision-making process. The contribution of this work is threefold: (1) it positions Bayesian Networks as both a mathematical reasoning tool and an accessible entry point into explainable AI; (2) it integrates probability theory, critical thinking, and transparency into a unified framework for Responsible AI education; and (3) it demonstrates how transparent reasoning can support human-in-the-loop decision-making and reduce automation bias. While the framework does not claim to solve the general challenges of explainability in complex AI models, it offers a concrete and transferable pathway for cultivating active thinkers capable of designing, interpreting, and questioning AI-assisted decisions.
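To illustrate the kind of reasoning the framework asks learners to make explicit, the following minimal sketch (not taken from the paper) applies Bayes' rule to the suspicious-email scenario mentioned in the abstract, using a single hypothetical evidence variable and illustrative probability values.

```python
# Minimal sketch (illustrative, not from the paper): a two-node Bayesian model
# Spam -> SuspiciousLink for the email-classification example in the abstract.
# All probability values below are hypothetical, chosen only for illustration.

# Prior: assume 10% of incoming mail is spam (hypothetical value).
p_spam = 0.10

# Likelihoods: probability of seeing a suspicious link given each class
# (hypothetical values).
p_link_given_spam = 0.70
p_link_given_ham = 0.05

# Bayes' rule: P(Spam | link) = P(link | Spam) * P(Spam) / P(link)
p_link = p_link_given_spam * p_spam + p_link_given_ham * (1 - p_spam)
p_spam_given_link = p_link_given_spam * p_spam / p_link

print(f"P(Spam | suspicious link) = {p_spam_given_link:.3f}")  # ~0.609
```

Because every prior and conditional probability is written down explicitly, a learner can inspect, question, and revise each assumption, which is the transparency property the abstract attributes to Bayesian Networks.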

Published

2025-12-04