What do we evaluate in serious games? A systematic review

Authors

  • Ernesto Pacheco-Velazquez Tecnologico de Monterrey
  • Andre Bester University of Twente
  • Lucia Rabago-Mayer University of Twente
  • Virginia Ro Tecnologico de Monterrey

DOI:

https://doi.org/10.34190/ecgbl.17.1.1627

Keywords:

Educational Innovation, Higher Education, Serious Games, Game-Based Learning, Simulators

Abstract

Serious games have emerged as an invaluable tool in education, changing the way students learn and engage with complex concepts. These games combine entertainment with educational content, creating immersive and interactive experiences that enhance learning outcomes. They have positioned themselves as a powerful educational resource recommended for new generations because of their benefits in terms of motivation, engagement, active learning, skill development, and adaptation to diverse learning styles. By integrating serious games into educational programs, educators can promote meaningful learning, foster relevant skills, and prepare students to tackle the challenges of the 21st century. The evaluation of serious games is important for several reasons. It helps determine whether a serious game meets its educational objectives and truly promotes learning and the development of specific skills. It also provides feedback on the design, gameplay, effectiveness, and other aspects of the game, allowing developers to identify strengths and areas for improvement and to optimize the learning experience. Evaluations further show whether a serious game caters appropriately to the needs and characteristics of its users, whether it is suitable for the target group, whether it is accessible to individuals with different abilities, and whether it provides an appropriate level of challenge to promote engagement and learning. Ultimately, evaluations lend validation and credibility to serious games as educational tools. This study presents a systematic review of the factors most frequently evaluated in serious games and the methodologies used, discusses the possibility of adding new factors, and points out the need to consider the opinions of other users to improve the evaluation of these resources.

Author Biography

Andre Bester, University of Twente

Andre Bester is a research and development engineer at the BMS Lab, University of Twente. He holds an M-Eng in Electronics from the University of Stellenbosch and is registered as a Chartered Engineer in the United Kingdom. Andre has more than 30 years' experience in sensor development, cognitive psychology research, and software development. His research interests are in machine learning applied to human behavior and serious/simulation gaming.

Published

2023-09-29