Student Learning Performance Evaluation: Mitigating the Challenges of Generative AI Chatbot Misuse in Student Assessments
DOI: https://doi.org/10.34190/ecel.23.1.2567
Keywords: Assessments, Education, Education Policy, Generative AI, Learning Performance, Performance Evaluation
Abstract
Since the launch of ChatGPT, a growing number of generative artificial intelligence (AI) chatbots have entered the market. Although chatbots have the potential to help students learn, misusing them to complete assessments raises questions about the authenticity of the work and puts students at risk of academic misconduct. Given the crucial role of assessments in evaluating students’ learning performance, uncertainty about the authenticity of submitted work calls into question the extent to which students have achieved the intended learning outcomes. This study conducted a thematic analysis to provide an overview of the challenges that chatbot misuse may pose to student learning performance evaluation, and of the mitigation strategies available to address these challenges. The study searched the Education Resources Information Centre (ERIC) database for peer-reviewed articles published in scholarly journals between 30 November 2022 (the launch date of ChatGPT, chosen because this study focuses on generative AI rather than other types of AI) and 30 April 2024. The thematic analysis of 17 articles identified five major themes (each with respective sub-themes) in the discussions of these articles, namely: reasons students use chatbots for assessments, challenges that chatbots may pose to student learning performance evaluation, mitigation strategies, detection strategies, and counter-detection strategies. As chatbots become more prevalent and powerful, the study's findings provide education stakeholders with insight into the implications of students misusing chatbots for assessments and how this misuse affects the evaluation of their learning performance.