Learning Machine Learning with a Game


  • Christoph Lürig, Trier University of Applied Sciences




strategy games, reinforcement learning, explainable AI


AIs playing strategy games have always fascinated humans. In particular, the reinforcement learning technique AlphaZero (Silver et al., 2016) has gained much attention for its ability to play Go, a game that was a hard problem for AI to crack for a long time. At the same time, we see the rise of explainable AI (xAI), which addresses the problem that many modern AI decision techniques are black-box approaches and incomprehensible to humans. Combining a board-game AI for the relatively simple game Connect Four with explanation techniques offers the possibility of learning something about the AI's inner workings and about the game itself. This paper explains how to combine an AlphaZero-based AI with explanation techniques known from supervised learning, and how to complement this with established visualization approaches for trees. AlphaZero combines a neural network with Monte Carlo tree search. The approach presented in this paper focuses on two explanations. The first is a dynamic analysis of the evolving situation, based primarily on the tree aspect, which works with a radial tree representation (Yee et al., 2001). The second is a static analysis that tries to identify the relevant elements of the situation using the LIME (Local Interpretable Model-Agnostic Explanations) approach (Anagnostopoulos, 2020). This technique focuses primarily on the neural network aspect. A straightforward application of LIME to the Monte Carlo tree search would be too compute-intensive for interactive applications. We therefore suggest a modification that accommodates search trees and deliberately sacrifices model agnosticism: we apply a weighted Lasso to the different board constellations evaluated by the neural network within the search tree to obtain a final static explanation of the situation. Finally, we visually interpret the resulting linear weights from the Lasso analysis on the game board.
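The weighted-Lasso idea described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: it assumes the boards visited in the search tree are available as flattened 6x7 Connect-Four grids, that each board has a value-network evaluation, and that MCTS visit counts serve as sample weights. All function and variable names are illustrative.

```python
# Hypothetical sketch of a LIME-style static explanation via weighted Lasso.
# Assumptions: boards are 6x7 grids with cells in {-1, 0, +1}, `values` are
# the network's evaluations, and `visit_counts` weight each sample.
import numpy as np
from sklearn.linear_model import Lasso

ROWS, COLS = 6, 7

def explain_position(boards, values, visit_counts, alpha=0.01):
    """Fit a sparse linear surrogate to the network's board evaluations.

    boards:       (n, 6, 7) array of cell occupancies
    values:       (n,) value-network output per board
    visit_counts: (n,) MCTS visit counts used as sample weights
    Returns a (6, 7) map of per-cell linear weights.
    """
    X = boards.reshape(len(boards), -1)          # flatten each board
    model = Lasso(alpha=alpha)
    model.fit(X, values, sample_weight=visit_counts)
    return model.coef_.reshape(ROWS, COLS)       # per-cell relevance

# Toy demonstration with random data in place of real tree statistics:
rng = np.random.default_rng(0)
boards = rng.integers(-1, 2, size=(50, ROWS, COLS)).astype(float)
values = rng.uniform(-1.0, 1.0, size=50)
counts = rng.integers(1, 20, size=50).astype(float)
weights = explain_position(boards, values, counts)
```

The resulting weight map can then be rendered directly on the game board, with the sign and magnitude of each cell's coefficient indicating its contribution to the position's evaluation.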
The implementation is done in Python, using the PyGame library for visualization and interaction. We implemented the neural networks with PyTorch and the Lasso analysis with scikit-learn. This paper provides implementation details on an experimental approach to learning something about a game and about how machines learn to play one.
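The radial tree representation used for the dynamic analysis can be sketched in the spirit of Yee et al. (2001): each node is placed on a ring whose radius grows with tree depth, and each subtree receives an angular wedge proportional to its size. The following is a minimal, assumed sketch with an illustrative dictionary-based tree structure, not the paper's actual layout code.

```python
# Hypothetical sketch of a radial tree layout: radius encodes depth,
# angular wedges are split among children in proportion to subtree size.
import math

def subtree_size(node):
    return 1 + sum(subtree_size(c) for c in node["children"])

def layout(node, depth=0, angle_lo=0.0, angle_hi=2 * math.pi, ring=60.0):
    """Assign an (x, y) position to every node, mutating the tree in place."""
    angle = (angle_lo + angle_hi) / 2            # center of this node's wedge
    r = depth * ring                             # one ring per tree level
    node["pos"] = (r * math.cos(angle), r * math.sin(angle))
    total = sum(subtree_size(c) for c in node["children"]) or 1
    a = angle_lo
    for child in node["children"]:
        span = (angle_hi - angle_lo) * subtree_size(child) / total
        layout(child, depth + 1, a, a + span, ring)
        a += span

# Toy search tree: root with two children, one of which has a grandchild.
tree = {"children": [{"children": [{"children": []}]}, {"children": []}]}
layout(tree)
```

In an interactive setting, the computed positions would be redrawn (e.g. with PyGame) after each expansion of the search tree, so the user can watch how the evaluation of the position evolves.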