International Conference on AI Research https://papers.academic-conferences.org/index.php/icair <p>The International Conference on AI Research (previously known as the European Conference on the Impact of AI and Robotics) has been run on an annual basis since 2021. Conference proceedings were published in 2021 and 2022, with a post-Covid break in 2023, and authors have been encouraged to upload their papers to university repositories. In addition, the proceedings are indexed by a number of indexing bodies.</p> <p>From 2022, all conference proceedings published by ACIL are fully open access. Individual papers and full proceedings can be accessed via this system.</p> <p><strong>PLEASE NOTE THAT IF YOU WISH TO SUBMIT A PAPER TO THIS CONFERENCE YOU SHOULD VISIT THE CONFERENCE WEBSITE AT <a href="https://www.academic-conferences.org/conferences/icair/">https://www.academic-conferences.org/conferences/icair/</a>. THIS PORTAL IS FOR AUTHORS OF ACCEPTED PAPERS ONLY.</strong></p> Academic Conferences International en-US International Conference on AI Research 3049-5628 The 4I Model of Benefits and Its Integration with the Technology Acceptance Model (TAM): Teenagers’ Perspectives on Using ChatGPT for Homework https://papers.academic-conferences.org/index.php/icair/article/view/3205 <p>The introduction of ChatGPT has provided students with a tool similar to a fairy-tale djinn, granting the ability to simplify homework and projects with just a few keystrokes. This study examines teenagers' perspectives on using ChatGPT for homework, exploring both the perceived benefits and concerns. Based on content analysis of responses from 141 teenagers aged 14 to 16 from grammar schools across various federal states in Germany, we introduce the <em>4I Model of Benefits</em>, identifying four key advantages: Information, Inspiration, Improvement, and Immediacy. Integrated within the Technology Acceptance Model (TAM) framework, this model clarifies factors influencing teenagers’ acceptance of AI text generators in education. Findings indicate that, while most teenagers recognize the distinct benefits of using ChatGPT, concerns about learning efficacy and over-reliance underscore the need for balanced educational integration. By enriching the TAM framework, the 4I Model identifies the specific benefits of ChatGPT that drive its acceptance and provides valuable insights for educators, policymakers, and developers.</p> Zinaida Adelhardt Thomas Eberle Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 1 9 10.34190/icair.4.1.3205 Adopting Artificial Intelligence in Organisations: A Closer Look https://papers.academic-conferences.org/index.php/icair/article/view/3129 <p>Artificial intelligence (AI) is increasingly being adopted in different types of organisations and is attracting the attention of various actors. In this context, the analysis aims to provide an overview of the most relevant aspects of the adoption of AI technology solutions in organisations. To this end, the analysis adopted the archival research method and carried out a study of recent documents/analyses proposed by leading consulting firms. These players accumulate considerable application knowledge and are, therefore, valuable sources. To provide an in-depth and up-to-date picture, the analysis considered a range of archival information (such as reports, articles, insights, technical notes, etc.) published by around 15 organisations in the consulting sector since the beginning of 2023.
The study highlights the main opportunities/risks associated with the introduction of AI. The findings cover various aspects, including AI investments, classification of organisations adopting AI, main challenges in AI initiatives, most common areas of AI use, and changes in work. The results may have some limitations due to bias in the identification of documents, as they were derived from direct searches and queries on the search engines of the consulting firms' websites. The audience and implications are broad, which is a strength of the work. AI novices studying the adoption and implementation of AI gain a picture of the relevant dimensions on which further in-depth studies can be developed. In addition, entrepreneurs, managers, and policymakers will have an overview of the main threats/opportunities of AI and elements to support decision-making.</p> Massimo Albanese Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 10 19 10.34190/icair.4.1.3129 Generative AI Learning Environment for Non-Computer Science Engineering Students: Coding Versus Generative AI Prompting https://papers.academic-conferences.org/index.php/icair/article/view/3220 <p>This paper addresses the need for a positive and effective learning environment for engineering students in non-computer science fields to grasp Generative AI principles while navigating the intricate balance between its application and developmental insights. Drawing from pedagogical theories and cognitive science, especially Leinenbach and Corey's (2004) Universal Design for Learning, this study proposes a framework tailored to the unique needs and backgrounds of engineering students. The framework emphasizes active learning strategies, collaborative problem-solving, and real-world applications to engage learners in meaningful experiences with Generative AI concepts. The central learning context is an M.Sc. program in management engineering with a course/training opportunity in Machine Learning Fundamentals using Python based on Google Colab. The introduction of Generative AI is based on selected Google libraries for Python. Furthermore, this paper explores various instructional approaches and tools to scaffold students' understanding of Generative AI, including hands-on projects, case studies, and interactive simulations. It also addresses ethical considerations and societal implications associated with Generative AI deployment, encouraging students to critically reflect on the broader impacts of their technical decisions. Through a synthesis of pedagogical best practices and AI development principles, this paper contributes to the ongoing discourse on effective AI education for non-computer science disciplines. By embracing a holistic approach that integrates theory with practical application, educators can empower engineering students to harness the transformative potential of Generative AI while navigating its complexities responsibly and ethically.</p> Robert Alphinas Torben Tambo Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 20 29 10.34190/icair.4.1.3220 An Exploratory Study on the Role of Generative AI in Designing Educational Board Games https://papers.academic-conferences.org/index.php/icair/article/view/3081 <p>The educational landscape is transforming due to new skill demands, fragmented attention spans, evolving communication patterns, and increased information access.
Game-based learning (GBL) has emerged as a potential solution, fostering subject competency and broader skills. However, integrating games into education poses challenges for educators and schools. This study examines Generative AI's (GenAI) role in facilitating board game use in education, based on a case study in an Italian high school implementing a GenAI-enhanced board game for foreign language teaching. Two state-of-the-art GenAI technologies assisted in the whole GBL design process. Findings indicate GenAI can streamline board game integration through flexible rule adaptation, customized artifact production, and rapid content creation, reducing educator workload, as well as extend educators' capabilities with novel, creative solutions. Benefits are more pronounced when GenAI use is based on an overarching educational strategy. The study's findings have implications for educators and researchers seeking to effectively integrate GBL and GenAI in educational settings.</p> Fabrizio Amarilli Oxana Timakova Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 30 39 10.34190/icair.4.1.3081 Artificial Intelligence Life Cycle: The Detection and Mitigation of Bias https://papers.academic-conferences.org/index.php/icair/article/view/3131 <p>The rapid expansion of Artificial Intelligence (AI) has outpaced the development of ethical guidelines and regulations, raising concerns about the potential for bias in AI systems. These biases in AI can manifest in real-world applications, leading to unfair or discriminatory outcomes in areas like job hiring, loan approvals or criminal justice predictions. For example, a biased AI model used for loan prediction may deny loans to qualified applicants based on demographic factors such as race or gender. This paper investigates the presence and mitigation of bias in Machine Learning (ML) models trained on the Adult Census Income dataset, known to have limitations regarding gender and race. Through comprehensive data analysis, focusing on sensitive attributes like gender, race and relationship status, this research sheds light on the complex relationships between societal biases and algorithmic outcomes and on how societal biases can become embedded in and amplified by ML algorithms. Utilising fairness metrics like demographic parity (DP) and equalised odds (EO), this paper quantifies the impact of bias on model predictions. The results demonstrated that biased datasets often lead to biased models even after applying pre-processing techniques. The effectiveness of mitigation techniques such as reweighting (Exponentiated Gradient (EG)) to reduce disparities was examined, resulting in a measurable reduction in bias disparities. However, these improvements came with trade-offs in accuracy and sometimes in other fairness metrics, highlighting the complex nature of bias mitigation and the need for careful consideration of ethical implications. The findings of this research highlight the critical importance of addressing bias at all stages of the AI life cycle, from data collection to model deployment. The limitations of this research, especially the use of EG, demonstrate the need for further development of bias mitigation techniques that can address complex relationships while maintaining accuracy.
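As a concrete illustration of the pipeline this abstract describes, the open-source Fairlearn library implements both fairness metrics named above (DP and EO) as well as the Exponentiated Gradient reduction, and ships a loader for the same Adult Census Income dataset. The following is an editorial sketch, not the authors' code; the feature encoding, model choice, and in-sample evaluation are simplifying assumptions.

```python
# Sketch: audit and mitigate bias on the Adult dataset with Fairlearn.
# Assumptions: 'sex' as the sensitive attribute, logistic regression as the
# base model, and in-sample evaluation (no train/test split) for brevity.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.datasets import fetch_adult
from fairlearn.metrics import demographic_parity_difference, equalized_odds_difference
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

data = fetch_adult(as_frame=True)
X = pd.get_dummies(data.data, drop_first=True)  # one-hot encode categoricals
y = (data.target == ">50K").astype(int)         # 1 = high income
sex = data.data["sex"]                          # sensitive attribute

baseline = LogisticRegression(max_iter=1000).fit(X, y)

# Exponentiated Gradient reduction with a demographic-parity constraint
mitigator = ExponentiatedGradient(LogisticRegression(max_iter=1000),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sex)

for name, pred in [("baseline", baseline.predict(X)),
                   ("mitigated", mitigator.predict(X))]:
    dp = demographic_parity_difference(y, pred, sensitive_features=sex)
    eo = equalized_odds_difference(y, pred, sensitive_features=sex)
    print(f"{name}: DP difference = {dp:.3f}, EO difference = {eo:.3f}")
```

A drop in the DP difference alongside a drop in accuracy would reproduce, in miniature, the accuracy-fairness trade-off the abstract reports.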
This paper concludes with recommendations for best practices in Artificial Intelligence development, emphasising the need for ongoing research and collaboration to mitigate bias by prioritising ethical considerations, transparency, explainability, and accountability to ensure fairness in AI systems.</p> Ashionye Aninze Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 40 49 10.34190/icair.4.1.3131 Designing an Artificial Intelligence Maturity Model for Human Resources (HR-AIMM) https://papers.academic-conferences.org/index.php/icair/article/view/3070 <p>Artificial intelligence (AI) has the potential to change the world of work radically. Wherever information processing is involved, AI can be integrated into processes with added value. From the perspective of Human Resource (HR) management, this implies three things: first, business models and performance processes in the company will undergo change; second, employee requirements will change; and third, HR processes will change. While the literature describes various AI maturity models, there has been no dedicated consideration of HR management. This article, therefore, aims to identify relevant influencing factors for an AI-orientated approach to HR management and to describe these in more detail using maturity levels in a Human Resources Artificial Intelligence Maturity Model (HR-AIMM). The resulting HR-AIMM consists of eleven dimensions. These include anchoring the AI topic in the corporate and HR strategy, its use in selected HR processes, considering ethical, data-related, and infrastructural principles, and organisational, cultural, and competence-related anchoring. The characteristics of these factors enable the identification of four maturity levels for using AI in HR management: from a curious start to the level of holistic integration. Our framework supports researchers and companies in understanding and evaluating the factors influencing the professional application of AI in HR management.</p> Sascha Armutat Malte Wattenberg Nina Mauritz Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 50 58 10.34190/icair.5.1.3070 Balancing AI in SMEs: Overcoming Psychological Barriers and Preserving Critical Thinking https://papers.academic-conferences.org/index.php/icair/article/view/3143 <p style="font-weight: 400;">The rapid integration of artificial intelligence (AI) into various business sectors is transforming the operational landscape. This paper focuses on small and medium-sized enterprises (SMEs) and examines how they can achieve an optimal psychological balance when using AI. The goal is to encourage employees to adopt AI while maintaining critical thinking and autonomy. This research paper uses theoretical analysis as the research method, drawing on established psychological theories such as the Technology Acceptance Model and the Job Demands-Resources Model. The findings are used to propose a model for determining an optimal balance for SMEs. The analysis reveals several psychological barriers related to anxiety, which can lead to increased stress, lower motivation, and general resistance to the use of AI tools. Conversely, once the fear is overcome, there is a risk of over-reliance on AI. Therefore, it is important to provide training that helps employees recognize the benefits of AI and its impact on their tasks, critically evaluate AI recommendations, and find a balance between automated guidance and human judgement.
Finding the optimal balance in the use of AI is critical. Fostering a culture of continuous learning and adaptability, together with supportive leadership, can help maintain this balance. In summary, through a strategic application of psychological theories, SMEs can harness the potential of AI to improve work performance and mitigate labour shortages while developing motivated and critically thinking employees.</p> Madeleine Block Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 59 66 10.34190/icair.4.1.3143 The new era of Technology Mysticism: Generative Artificial Intelligence and its effects https://papers.academic-conferences.org/index.php/icair/article/view/3159 <p>Generative Artificial Intelligence (GenAI) has started to disrupt many application areas in the domain of information technology and is developing at a rapid pace. GenAI exhibits different systemic characteristics: it is a trained technology, as opposed to the engineered technologies that have been developed in the IT domain in the past. As trained technologies depend heavily on huge amounts of training data, their behaviour is not deterministic but stochastic and emergent, leading to a limited understanding of those systems. Users' assumptions and beliefs about the abilities of these technologies create a new age of mysticism, comparable to the relation of people in medieval times to their understanding of nature and their surroundings. We are facing AI today like our ancestors faced incomprehensible natural phenomena. This article discusses the resulting effects from both a technical and a philosophical perspective.</p> Karsten Böhm Jürgen Sammet Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 67 74 10.34190/icair.4.1.3159 Artificial Intelligence and Trend Forecasting in the Fashion Industry: Ethical and Anticipated Ethical Issues https://papers.academic-conferences.org/index.php/icair/article/view/3186 <p>Trend forecasting within the fashion industry is aimed at predicting future popular styles, materials, colors, and all things related to the development of fashion. Trend forecasting is based on information collected about past, present, and projected future developments in the fashion world. AI, as it has been applied in other areas of society, is now being applied within the fashion industry. This analysis will focus on AI being applied to trend forecasting within the fashion industry. Using AI in trend forecasting brings a number of advantages for those in the fashion industry while at the same time raising a number of ethical concerns. Three significant ethical concerns related to the employment of AI in the fashion industry are: the invasion of privacy from the data mining required to make predictions about fashion trends, the consequences for businesses of incorrect trend predictions, and the anticipated environmentally unsustainable nature of fast fashion due to the overconsumption that consumers practice and that AI increasingly promotes. Overconsumption is the product of how AI is involved in rapidly forecasting new trends in fashion that consumers then follow. Addressing these ethical issues is necessary because both consumers and investors continue to demand fast fashion.
New technologies, particularly AI, have enabled fashion brands to develop methods that keep them ahead of rapidly changing fashion trends and put high-demand products in stores faster, increasing the prevalence of fast fashion within the fashion industry. Questions now arise about how to address these ethical concerns. Important questions include what issues the use of AI in trend forecasting will create in the long run, and how the fashion industry can take proactive action to prevent future ethical issues. This paper employs empirical methods to describe the fashion industry while also employing conceptual analysis grounded in philosophical ethics. The goal of the analysis is to provide an informative ethical analysis of AI in trend forecasting, while also attempting to develop ethical guidance for concerns involving its use. The paper performs an anticipatory ethical analysis that attempts to address future concerns about fashion and closes by drawing conclusions about the direction of future analysis related to the application of AI in the fashion industry.</p> Richard Wilson Olivia Bowles Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 75 81 10.34190/icair.4.1.3186 Advancing Corporate Social Responsibility in AI-driven Human Resources Management: A Maturity Model Approach https://papers.academic-conferences.org/index.php/icair/article/view/3016 <p style="font-weight: 400;">Artificial Intelligence (AI) in the corporate environment has been the subject of several current social debates and many scientific studies on reshaping human resource management (HRM) practices. There is a significant research gap in understanding how AI can be ethically and effectively integrated into HRM, particularly in relation to corporate social responsibility (CSR). This study aims to address this gap by proposing a maturity model to assess and guide the responsible implementation of AI in HRM practices. The research goal focuses on how AI can be aligned with CSR principles to ensure ethical, transparent, and socially responsible usage in organizational settings. The methodology includes a comprehensive review of 52 academic papers, employing bibliometrics, network analysis, and thematic content analysis to explore the interplay between AI, CSR, and HRM. These analyses allowed the identification of key ethical concerns and challenges in AI-driven HRM practices, such as bias in AI algorithms, data privacy issues, transparency, and the need for technical proficiency among HR professionals. The findings reveal a five-level AI maturity model, each stage representing progressive alignment with CSR principles. Organizations at lower maturity levels tend to have ad-hoc AI implementations with minimal CSR focus, while those at higher levels demonstrate full integration of ethical AI practices. Additionally, the study highlights the importance of transparency, accountability, and employee empowerment as critical elements for advancing AI maturity in HRM. This research contributes by offering organizations a practical tool to assess and enhance their AI-driven HRM processes through a CSR lens. It also provides a foundation for future research on strategic policy development, ethical AI governance, and continuous improvement in the integration of AI in HRM.
Scholars are encouraged to explore these areas further, particularly in understanding how AI can foster not only organizational efficiency but also social responsibility and ethical standards.</p> Marta Bubicz Marcos Ferasso Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 82 90 10.34190/icair.4.1.3016 Bridging the Gap: Practical Challenges and Strategic Imperatives in Adopting Gen AI https://papers.academic-conferences.org/index.php/icair/article/view/3221 <p>In today's rapidly evolving business landscape, Artificial Intelligence (AI) stands as the proverbial 'elephant in the room,' profoundly shaping diverse sectors and contexts. While debates rage among policymakers, practitioners, and politicians about regulating AI's widespread use, it is undeniable that AI represents a long-awaited digital technology poised to revolutionize organizational performance. Amidst the post-COVID era, the allure of AI has intensified, yet the critical question lingers: can firms effectively harness AI and other technologies, such as Generative AI and Large Language Models (LLMs), to enhance their existing systems? Through a systematic literature review, a clear correlation between firms’ implementation of AI, including the cutting-edge Generative AI, and their ability to adapt to changing market dynamics, drive operational excellence, and unlock new avenues for growth is explored. Moreover, key drivers of AI and digital adoption, such as the imperative for data-driven decision-making, the quest for customer-centricity, and the drive for sustainable business practices, are also examined. Our research not only highlights the transformative potential of AI and digital technologies but also provides actionable insights for business leaders navigating the complexities of technology adoption. By understanding the motivations, challenges, and strategic imperatives driving firms' technology choices, including the integration of Generative AI and LLMs, organizations can chart a path towards sustainable growth and competitive advantage in the digital age. This study underscores the revolutionary impact of Generative AI in digital transformation, offering a comprehensive understanding of its role in shaping the future of business.</p> Andrea Di Vetta Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 91 98 10.34190/icair.4.1.3221 The Academic Anti-Procrastination Approach: Combining Peer Motivation and Personalized Artificial Intelligence Reminders https://papers.academic-conferences.org/index.php/icair/article/view/3222 <p>Academic procrastination is a pervasive issue that significantly affects college students, leading to increased anxiety, stress, and reduced study efficiency and performance. Despite numerous studies exploring the causes and solutions to reduce procrastination, including the positive effects of peer motivation and technological interventions, the integration of artificial intelligence (AI) interventions through peer motivation and smart reminders remains underexplored. In this study, we conducted a systematic literature review on the causes of academic procrastination, the influence of social motivation, and technical interventions aimed at reducing procrastination. Our review revealed a significant research gap regarding the use of AI reminders and peer motivation to help students mitigate procrastination and enhance productivity.
To address this gap, we propose an Academic Anti-Procrastination Approach that integrates peer motivation, social interaction, and AI-driven reminders. This approach utilizes the social networks of college students and incorporates AI tools to create a support system designed to reduce procrastination. We conducted an experiment to evaluate the effectiveness of this approach, using a mixed-methods methodology to analyze the results. Our findings suggest that the approach effectively reduces academic procrastination by harnessing the synergistic effects of peer motivation and AI-driven interventions. Quantitative results showed a p-value of 0.0017 in the experimental group, indicating a statistically significant decrease in procrastination scores after the intervention. Qualitative semi-structured interview results revealed that all participants found the personalized AI reminders helpful, with 87% stating that social motivation and interaction motivated them to complete tasks. Additionally, 80% of participants indicated that the concepts behind the approach would be useful in combating procrastination and expressed a willingness to use the approach and join a peer group. This study offers practical contributions for combating academic procrastination in college environments. Students can utilize this method to create supportive peer groups and leverage personalized technological support, helping them overcome low efficiency and maintain focus on academic tasks.</p> Xiaojiao Duan Zhaoxia Yi Yongjia Sun Itamar Shabtai Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 99 107 10.34190/icair.4.1.3222 How Good Is GPT’s “Emojinal Intelligence”? Investigating Emoji Patterns in LLM-Generated Social Media Text https://papers.academic-conferences.org/index.php/icair/article/view/3225 <p>Recent advancements in Large Language Models (LLMs) have opened the prospect of generating text for social media content that mimics human writing. The misuse of these tools presents urgent dilemmas, motivating the need to better understand the structure and patterns of LLM-generated content. Human communication on the Internet has developed relevant linguistic adaptations, including the use of emoji to augment traditional text. This study investigates the ability of one LLM, OpenAI’s GPT-3.5, to replicate human emoji usage in social media contexts. Drawing upon a dataset of nearly three thousand US English human-written tweets, we employed GPT-3.5-Turbo to generate social-media-style content and analyzed the use of emoji in the resulting text. We compared the patterns of emoji usage between the LLM-generated and the human-written datasets, particularly frequency, types of emoji commonly used, and emoji sequences (n-grams). Our results revealed notable differences in all categories.
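The frequency and n-gram comparison described in this abstract can be sketched in a few lines. This is an editorial illustration assuming the `emoji` package from PyPI; the sample texts below are invented placeholders, not tweets from the study's dataset.

```python
# Sketch: extract emoji from two corpora and compare frequencies and
# adjacent-pair sequences (bigrams), as in the human-vs-LLM comparison above.
from collections import Counter
import emoji  # pip install emoji

def emoji_stats(texts):
    counts, bigrams = Counter(), Counter()
    for text in texts:
        found = [m["emoji"] for m in emoji.emoji_list(text)]
        counts.update(found)
        bigrams.update(zip(found, found[1:]))  # pairs within a single post
    return counts, bigrams

human_counts, human_bigrams = emoji_stats(["what a game 😂😂❤️", "so proud 🥹❤️"])
llm_counts, llm_bigrams = emoji_stats(["Exploring the city today 🏙️🚶📸"])

print("human:", human_counts.most_common(3), human_bigrams.most_common(2))
print("LLM:  ", llm_counts.most_common(3), llm_bigrams.most_common(2))
```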
While human-written tweets were more likely to use faces, hearts, and repetitive sequences of emoji, LLM-created content had a broader variety of emoji, with a preference for literal representations of the text’s subject matter, producing diverse and unique emoji combinations.</p> Michael Dunn Kenneth Hopkinson Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 108 114 10.34190/icair.4.1.3225 Online Managerial Tools as Research Tools to Apply Artificial Management: Results of Research https://papers.academic-conferences.org/index.php/icair/article/view/3190 <p>Artificial intelligence (AI) can augment human intelligence in teamwork; however, it is still not clear how to implement artificial management. The rapid development of computer science creates opportunities to replace team managers with robots. However, it is still not possible to employ a robot in a managerial position. Therefore, the aim of this paper is to present a theoretical foundation for an information system which could be widely used by human managers in their day-to-day work. At the same time, it could collect data on managerial work in order to implement artificial management. The research problem concerns the theoretical assumptions needed to design this solution. Two research questions arise from this research problem: (RQ1) What types of research methods should be used to collect data on managerial work? (RQ2) How can research methods be implemented in managerial tools to automate managerial work? The research methods used in the paper are literature studies, technical documentation of online managerial tools created by the author, and long-term observation of human-managed teams. In Section 2 we present the theoretical foundation of research in management science, which is the answer to the first research question (RQ1). Section 3 contains the theoretical assumptions of the information system design, which is the answer to the second research question (RQ2). In Section 4 we present examples of research conducted by the author as the practical use of answers to both research questions (RQ1 and RQ2). Section 5 includes conclusions on the use of the information system in artificial management. The main contributions of this paper are as follows. Firstly, it answers the first research question (RQ1) about the types of research methods which should be used to collect data on managerial work in order to automate it: the mixed method is the most appropriate way to study what a manager really does. Secondly, the paper answers the second research question (RQ2) on ways of implementing research methods in the online managerial tools aimed at artificial management.</p> Olaf Flak Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 115 125 10.34190/icair.4.1.3190 Generative AI in Writing Workshops: A Path to AI Literacy https://papers.academic-conferences.org/index.php/icair/article/view/3022 <p>The widespread use of generative AI tools which can support or even take over several parts of the writing process has sparked many discussions about integrity, AI literacy and changes to academic writing processes. This paper explores the impact of generative artificial intelligence (AI) tools on the academic writing process, drawing on data from a writing workshop and interviews with students at a university teacher college in Austria.
Despite the widespread assumption that generative AI, such as ChatGPT, is widely used by students to support their academic tasks, initial findings suggest a notable gap in participants' experience and understanding of these technologies. This discrepancy highlights the critical need for AI literacy and underscores the importance of familiarity with the potential, challenges and risks associated with generative AI to ensure its ethical and effective use. Through reflective discussions and feedback from workshop participants, this study offers a differentiated perspective on the role of generative AI in academic writing, illustrating its value in generating ideas and overcoming writer's block, as well as its limitations due to the indispensable nature of human involvement in critical writing tasks.</p> Sonja Gabriel Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 126 132 10.34190/icair.4.1.3022 Generative AI and Educational (In)Equity https://papers.academic-conferences.org/index.php/icair/article/view/3153 <p>This paper examines the complex relationship between generative artificial intelligence (AI) and educational equity, analysing both the opportunities and challenges presented by these emerging technologies in educational contexts. The paper begins by establishing fundamental distinctions between educational equality and equity, emphasizing how various socioeconomic, cultural, and systemic factors contribute to persistent educational disparities. It then provides a comprehensive overview of generative AI technologies, particularly focusing on Large Language Models (LLMs) and their applications in educational settings. The analysis reveals several promising applications of generative AI for promoting educational equity, including enhanced accessibility features for students with disabilities, personalized learning experiences, and the creation of Open Educational Resources (OER). The paper highlights how AI-assisted tutoring, incorporating Socratic dialogue methods, and AI-generated feedback systems can provide valuable educational support, especially in resource-constrained environments. These technologies demonstrate potential in breaking down traditional barriers to education by offering multilingual support, adaptive learning materials, and immediate feedback mechanisms. However, the paper also addresses significant challenges and risks associated with implementing generative AI in education. These include concerns about digital divides, both in terms of access to technology and digital literacy skills, as well as the potential for AI systems to perpetuate existing biases. The research emphasizes the importance of thoughtful integration of AI technologies in educational settings, suggesting that the most effective approach may be a balanced combination of human instruction and AI-supported learning. By examining these various aspects, the paper contributes to ongoing discussions about how to harness generative AI's potential while ensuring its implementation promotes, rather than hinders, educational equity.
The findings have significant implications for educators, policymakers, and educational institutions working to create more equitable learning environments in an increasingly technology-driven world.</p> Sonja Gabriel Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 133 142 10.34190/icair.4.1.3153 Exploring the Transformative Intersection of Artificial Intelligence and Educational Research: K-12 Principals Supporting English Learners https://papers.academic-conferences.org/index.php/icair/article/view/3020 <p>The integration of artificial intelligence (AI) into educational research marks a significant paradigm shift, where AI intersects with educational research from diverse perspectives, emphasizing its transformative potential. By leveraging AI technologies, researchers can transcend traditional limitations, enhancing their capabilities to pose more incisive questions, analyze vast datasets, and refine research methodologies, ultimately leading to more impactful outcomes. Within the context of a research endeavor focused on K-12 principals supporting classroom teachers serving English Learners in the United States, we explore how AI algorithms can refine research questions and augment research methodologies, leading to deeper insights and more informed decisions in educational studies. Innovative techniques for optimizing survey questions and methodologies are discussed, showcasing AI's analytical prowess in unlocking new avenues of understanding. Through advanced data processing techniques, AI unveils patterns, correlations, and insights that may elude traditional analysis methods, facilitating deeper understanding and enabling more comprehensive and nuanced studies. The realm of AI-driven skill enhancement for researchers is addressed by illustrating the process in the context of a study that seeks a deeper understanding of the strategies principals employ to develop teachers working with English learners. We highlight the transformative potential of AI in revolutionizing educational research practices and enhancing outcomes for English learners in the K-12 education system. By leveraging AI-powered tools that provide real-time feedback and facilitate iterative refinement of practices, researchers can improve their interviewing techniques, refine performance, and foster a culture of continuous improvement. This collaborative approach can enrich individual research endeavors and contribute to the collective advancement of research methodologies within the educational landscape.</p> Belinda Gimbert Dustin Miller Raeal Moore Dean Cristol Nick Giester Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 143 149 10.34190/icair.4.1.3020 Citizens’ Right to Information and the Principle of Good Administration: What’s New in the AI Act?
https://papers.academic-conferences.org/index.php/icair/article/view/3223 <p>The EU Artificial Intelligence (AI) Act has sparked debates about establishing new or expanded information rights for citizens affected by AI-based administrative decision-making (AADM). While some argue that new or expanded information rights for citizens are reasonable and ethically recommendable, others caution that such rights may inhibit innovation. However, as with any public debate, arguments should be grounded in a clear understanding of existing rights enshrined in current regulations. This article conducts a legal dogmatic analysis of the right to good administration in the EU Charter of Fundamental Rights (EU Charter) and its constituent information rights, namely the right to consultation, the right to access, and the right to an explanation, to provide a foundation for future debates on new or expanded information rights for citizens affected by AADM. Moreover, this article explores how the transparency provision in Article 13 of the AI Act interacts with these information rights and discusses whether these regulations collectively form a robust legal framework for citizens affected by AADM. Our approach to selecting and examining legal sources mirrors that of the Court of Justice of the European Union, as the Court holds the interpretation privilege of the examined provisions (dogmatic method within legal realism).</p> Anna Murphy Høgenhaug Hanne Marie Motzfeldt Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 150 156 10.34190/icair.4.1.3223 Artificial Intelligence and Neural Style Transfer in the Context of Art and Design: Ethical and Anticipated Ethical Issues https://papers.academic-conferences.org/index.php/icair/article/view/3185 <p>From the shape of our cellphones and the colorful packaging on our foods to the material in our clothing, every object around us typically has some element of design associated with it. Design is concerned with how users interact with the objects around them. This analysis will be concerned with identifying how AI is being applied in three main categories of design: functional design, visceral design, and behavioral design. Functional designs prioritize the function of objects over form. Visceral designs are concerned with issues of the pure aesthetics of objects. Behavioral designs influence users to act based upon the design of an object, whether it pertains to purchasing the item or using the item in a preferred way. In this analysis, an overlap of these categories will be analyzed through the lens of traditional paintings. A painting reflects a story told by an artist which allows for a variety of interpretations by the perceivers of the artwork. However, what happens when Artificial Intelligence (AI) is used in conjunction with painting? AI, when applied to painting, uses art-related generative algorithms and neural networks adapted from data-processing models. In the case of painting, AI relies on this type of model to perform Neural Style Transfer (NST), composing a new work of art in the style of another artist. Through the lenses of generative AI's current application and the implications related to its future use, this analysis will provide an extensive overview of the convergence of technology, art, and design.
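For readers unfamiliar with the technique, the NST method the abstract refers to can be sketched compactly: optimize an image so that its deep-feature content matches one photo while the Gram matrices of its features match a painting's style. The following PyTorch sketch follows the common Gatys-style recipe; the layer taps, loss weights, and file names are illustrative assumptions, not this paper's method.

```python
# Sketch: minimal Gatys-style Neural Style Transfer with a pretrained VGG-19.
import torch, torch.nn.functional as F
from torchvision import transforms
from torchvision.models import vgg19
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
load = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor(),
                           transforms.Normalize([0.485, 0.456, 0.406],
                                                [0.229, 0.224, 0.225])])
content = load(Image.open("content.jpg")).unsqueeze(0).to(device)  # placeholder files
style = load(Image.open("style.jpg")).unsqueeze(0).to(device)

features = vgg19(weights="DEFAULT").features.to(device).eval()
for p in features.parameters():
    p.requires_grad_(False)

TAPS = {1: "style", 6: "style", 11: "style", 20: "style", 21: "content"}

def extract(x):
    grams, content_feat = [], None
    for i, layer in enumerate(features):
        x = layer(x)
        if TAPS.get(i) == "style":
            c, h, w = x.shape[1:]
            f = x.reshape(c, h * w)
            grams.append(f @ f.T / (c * h * w))  # normalized Gram matrix
        elif TAPS.get(i) == "content":
            content_feat = x
        if i == 21:  # no taps beyond this layer
            break
    return grams, content_feat

style_grams, _ = extract(style)
_, content_target = extract(content)

image = content.clone().requires_grad_(True)  # start from the content photo
opt = torch.optim.Adam([image], lr=0.02)
for step in range(200):
    opt.zero_grad()
    grams, feat = extract(image)
    loss = F.mse_loss(feat, content_target)  # keep the content
    loss = loss + 1e4 * sum(F.mse_loss(g, t) for g, t in zip(grams, style_grams))
    loss.backward()
    opt.step()
# De-normalizing and saving the result is omitted for brevity.
```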
This discussion will also address potential ethical and anticipated ethical concerns about authorship, originality, the value of AI-generated art, and the impact on traditional practices of Art and Design from the perspective of painting. As AI technology for creating art continues to develop, anticipatory ethics will attempt to identify the ethical issues that arise with generative AI in the area of art.</p> Richard Wilson Eunice Hong Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 157 164 10.34190/icair.4.1.3185 Play my thesis https://papers.academic-conferences.org/index.php/icair/article/view/3130 <p style="font-weight: 400;">The field of Artificial Intelligence (AI) has gained increased attention since the release of ChatGPT in late 2022. Following this popularity and wide application, several generative AI (GAI or GenAI) tools have been released, with the capability of generating novel content such as text, images, audio, and video. Previous research has noted both opportunities and limitations with GAI for various fields. One field with high potential impact is the game industry and the subfield of serious games, where the purpose of the game extends beyond pure entertainment. GAI could play an important role in the design and development of serious games, where design teams are typically smaller (compared with big commercial games), and competences in game mechanics, graphics and other game resources can be limited. This paper explores the opportunities and limitations of GAI tools to support the development of serious games. This is done through the development of a serious game, based on the author’s PhD thesis, where various GAI tools are used to generate game content such as in-game dialogue, graphics, and audio. Development of the game artefact, which is named ‘Computer Programming in Schools’, follows the process of a design science research (DSR) project where emphasis is placed on the design in one iteration of development. The author’s experiences of utilizing GAI tools during the development of the game were recorded in a researcher diary, together with screenshots and images from the game. In this paper, these experiences are discussed and compared to related research in order to identify the most salient opportunities and limitations for GAI to support the design and development of serious games. The paper provides several hands-on examples of the design and development of a serious game with GAI tools and concludes with a set of recommendations for the future use of GAI in game development.</p> Niklas Humble Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 165 174 10.34190/icair.4.1.3130 Generative Artificial Intelligence and the Impact on Sustainability https://papers.academic-conferences.org/index.php/icair/article/view/3024 <p style="font-weight: 400;">An increasingly popular subcategory of Artificial Intelligence (AI) is Generative AI (GAI), which encompasses technologies capable of creating new content, such as images, text, and music, often resembling outputs made by humans. The potential impact of GAI on sustainability is multifaceted. On the positive side, generative AI can aid in optimizing processes, developing innovative solutions, and identifying patterns in large datasets related to sustainability. This can lead to more efficient resource management, reduced energy consumption, and the creation of more sustainable products.
However, there are also potential negative impacts, such as increased energy consumption associated with training and running generative AI models, as well as the potential for unintended consequences or biases in the generated content. Additionally, overreliance on generative AI may lead to reduced human oversight, which could undermine holistic, interdisciplinary, and collaborative approaches to sustainability. The aim of this paper is to explore the potential impacts of generative artificial intelligence on sustainability through a review of prior research on the topic.</p> <p style="font-weight: 400;">The study was conducted with a scoping literature review approach to identify potential impacts of generative AI on sustainability. Data were collected through a search in the database Scopus during the spring semester of 2024. Keywords relevant to the study were combined with Boolean operators. Papers identified through the search underwent a manual screening process by the authors, in which papers were selected for inclusion or exclusion in the study based on a set of criteria. Included papers were then analyzed with thematic analysis, according to the guidelines by Braun and Clarke. A categorization matrix, based on prior research on sustainability, supported the analysis and deductive coding of the collected data. Results of the study highlight potential impacts of generative AI that relate to environmental, economic, and social aspects of sustainability. These different aspects of sustainability impact make this research an important contribution to deepening the understanding of generative AI and its potential consequences for society. Findings of the study provide theoretical contributions, implications for practice, and recommendations for future research on generative AI and sustainability.</p> Niklas Humble Peter Mozelius Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 175 182 10.34190/icair.4.1.3024 Unleashing Human Potential: A Framework for Augmenting Co-Creation with Generative AI https://papers.academic-conferences.org/index.php/icair/article/view/3074 <p>This paper redefines the traditional view of automation as a threat to human labor, advocating for human-AI co-creation as a strategic imperative for organizations. We propose a comprehensive six-stage framework for co-creation, emphasizing iterative feedback loops and continuous improvement to integrate generative AI into workflows effectively. Drawing from a multi-disciplinary perspective, we explore critical enablers of successful human-AI collaboration, including user-centered interface design, explainable AI systems, and fostering a culture of trust and experimentation. Real-world case studies, such as AI-enhanced visual design and creative writing, illustrate the transformative potential of co-creation across various sectors. We also propose a multifaceted measurement framework encompassing quantitative metrics (e.g., productivity gains, time-to-market acceleration) and qualitative indicators (e.g., employee well-being, skill development) to assess the impact of co-creation comprehensively.
This research offers a strategic roadmap for organizations to embrace generative AI as a tool for collaboration and augmentation, thereby unlocking new levels of creativity, productivity, and employee empowerment.</p> Mitt Kabir Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 183 193 10.34190/icair.4.1.3074 Artificial Intelligence in Banking: The Evidence from Poland https://papers.academic-conferences.org/index.php/icair/article/view/3197 <p>The widespread use of new technologies has blurred the boundary between physical and digital realities. The dynamic development of artificial intelligence (AI) systems has contributed to the digital transformation of the global economy. Given its extensive range of potential applications, AI has the potential to impact a multitude of socio-economic domains, including politics, security, health care, medicine, economy, trade, finance, taxes, and production. As the banking sector plays a crucial role in the global economy, the question arises as to whether and to what extent banking market entities are willing to use artificial intelligence solutions in their business processes. This study aims to determine the scope of AI use by key banks operating in the Polish banking market. Achieving this goal requires determining what categories of AI are used by key banks operating in Poland, analysing the AI methods used by those banks, and the fields of AI implementation. The research scope covers the analysis of AI applications in the largest banks operating in Poland, which together cover almost half of the market share (PKO Bank Polski S.A., Bank Pekao S.A., Santander Bank Polska S.A., mBank S.A. and ING Bank Śląski S.A.). To obtain an up-to-date overview of AI usage, the research period is 2019-2024. The empirical part of the research is supported by an analysis of the widespread use of artificial intelligence and its dynamic international development based on academic papers and industry reports.</p> Monika Klimontowicz Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 194 202 10.34190/icair.4.1.3197 Artificial Intelligence (AI) in the Dentist's Work: The Perspective of Anthropomorphization and Cognitive Aspects of the Decision-Making Process https://papers.academic-conferences.org/index.php/icair/article/view/3160 <p>This research aims to evaluate the performance of an artificial intelligence (AI) tool called CranioCatch, which assists dentists, by examining the anthropomorphization and cognitive aspects of the decision-making process. It remains unclear to what extent technology replaces or complements the work of healthcare professionals. Additionally, the specific cognitive aspects of the decision-making process attributed to CranioCatch, characterizing the anthropomorphization of the technology, have not been mapped. The methodology employed included "instruction to the double," "shadowing," and interviews. The study reveals how CranioCatch replicates cognitive activities performed by dentists by comparing, correlating, and associating images from a specific patient with similar images in a database. Through these steps (comparison, correlation, and association), which constitute the identification of the problem, it is possible to develop alternatives and select the best solution for the patient's situation. Thus, AI contributes to both the identification and action aspects of the decision-making process.
From a physical perspective of anthropomorphization, the primary human characteristic replicated by the AI was the sense of vision. Regarding contributions to dentistry, the AI technology studied enhances the efficiency of professionals, notably reducing evaluation and diagnosis times.</p> Guilherme Knupp Muniz Luciana Paula Reis Sérgio Evangelista Silva June Marques Fernandes Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 203 209 10.34190/icair.4.1.3160 AI in the Learning Environment: Examination of Pedagogical, Psychological, and Ethical Implications https://papers.academic-conferences.org/index.php/icair/article/view/3019 <p>This paper provides an overview of how AI tools and capabilities are being integrated in teaching and learning. It also addresses ethical aspects of integrating AI and other advanced technologies in these processes as well as the resulting implications. These issues include algorithmic fairness, privacy and surveillance, intellectual property rights, misinformation and violence, health concerns, and social implications. In addition, how access to such tools will impact the cognitive functions of their users, particularly learners, has become a relevant question. Much more information is now becoming easily accessible to a wider population, making it important to understand and mitigate AI's ramifications for human learning, reasoning, and perception, especially for young learners whose brains are still in the early developmental stages. To that end, this paper also reviews the current literature regarding the impact of AI integration in education, focusing on the potential effects this can have on the cognitive abilities and functions of learners. Thus, with AI making its way into the educational realm, from pre-college to advanced studies levels, understanding the resulting pedagogical, psychological, and ethical ramifications can guide thoughtful development and appropriate integration of AI tools in support of the learning environment.</p> Gala Krsmanovic Fadi P. Deek Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 210 217 10.34190/icair.4.1.3019 What The Phish! Effects of AI on Phishing Attacks and Defense https://papers.academic-conferences.org/index.php/icair/article/view/3224 <p>The rapid advancement of artificial intelligence (AI) has significantly transformed the landscape of phishing attacks, presenting new challenges for detection and defense. AI-generated phishing emails, which leverage machine learning and natural language processing (NLP), have become increasingly sophisticated, making traditional detection methods ineffective. This research analyzes the evolution and impact of AI-driven phishing attacks, comparing the distinguishing linguistic and contextual patterns of AI-generated versus human-generated phishing emails. The study utilizes a comprehensive dataset, insights from informal discussions with Chief Information Security Officers (CISOs), and an analysis of historical phishing incidents before and after the release of advanced generative models like ChatGPT. Findings reveal that AI-generated phishing emails exhibit higher success rates due to their ability to bypass conventional spam filters and mimic human communication styles.
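The linguistic comparison such a study performs can be sketched with a simple supervised baseline: vectorize email text and train a classifier to separate human-written from LLM-generated phishing messages. This is an editorial illustration with invented placeholder emails; the study's dataset and feature set are far richer than this.

```python
# Sketch: bag-of-words baseline separating human-written from LLM-generated
# phishing text. The four training messages are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "URGENT!! verify ur account now or it will be CLOSED",                # human
    "click here free prize winner act fast",                              # human
    "Dear customer, we noticed unusual activity on your account.",        # LLM
    "Please review the attached invoice at your earliest convenience.",   # LLM
]
labels = ["human", "human", "llm", "llm"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)
print(clf.predict(["Kindly confirm your billing details to avoid interruption."]))
```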
Additionally, the research identifies significant gaps in current defense strategies and recommends a multi-layered security framework that integrates AI-specific detection tools, real-time threat intelligence, and machine learning-based anomaly detection to mitigate these evolving threats. This study emphasizes the need for organizations to proactively adapt to the growing sophistication of AI-powered phishing by implementing advanced defenses that are capable of keeping pace with the rapidly changing cyber threat landscape.</p> Shreyas Kumar Anisha Menezes Sarthak Giri Srujan Kotikela Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 218 226 10.34190/icair.4.1.3224 Governance Considerations of Adversarial Attacks on AI Systems https://papers.academic-conferences.org/index.php/icair/article/view/3194 <p>Artificial intelligence (AI) is increasingly integrated into various aspects of daily life, but its susceptibility to adversarial attacks poses significant governance challenges. This paper explores the nature of these attacks, where malicious actors manipulate input data to deceive AI algorithms, and their profound implications for individuals and society. Adversarial attacks can undermine critical AI applications, such as facial recognition and natural language processing, leading to privacy violations, biased outcomes, and eroded public trust. The discussion emphasizes understanding the threat vectors associated with adversarial attacks and their potential repercussions. It advocates for robust governance frameworks encompassing risk management, oversight, and legislative measures to protect AI systems. Such frameworks should prioritize AI technologies' confidentiality, integrity, and availability (CIA) while ensuring compliance with ethical standards. Furthermore, the paper examines various strategies for mitigating risks associated with adversarial attacks, including training and continuous monitoring of AI systems. It highlights the importance of accountability among developers and researchers in implementing preventive measures that align with principles of transparency and fairness. Organizations can enhance security and foster public trust by integrating legislative frameworks into AI development standards. As AI technologies evolve, continuous review of governance practices is essential to address emerging threats effectively. This paper ultimately emphasizes the critical role of comprehensive governance in safeguarding AI systems against adversarial attacks, ensuring that technological advancements benefit society while minimizing risks.</p> Nombulelo Lekota Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 227 233 10.34190/icair.4.1.3194 Mapping Software-Engineering Industry AI Use to Software-Engineering Curriculum: Developing the AI-USE Framework https://papers.academic-conferences.org/index.php/icair/article/view/3034 <p class="p1">Estimates predict a global deficit of 4 million software engineers by 2025, a gap further complicated by the software engineering (SE) industry's escalating use of artificial intelligence (AI). To tackle this issue, our research suggests that computer science (CS) curricula in middle and high schools need to be updated to incorporate SE industry segments that significantly employ AI. This strategic curriculum alignment is critical for preparing a workforce equipped to meet future industry demands.
Our initial analysis involved reviewing nine international AI education guidelines to evaluate current methods for integrating AI into SE education. The findings indicated a pronounced lack of specific guidance connecting AI applications in the SE industry with educational content. To address this, we performed a systematic literature review of 12 research papers focusing on AI's role across the SE industry, followed by multiple rounds of inductive content analysis. An industry segment was deemed "essential" if it was referenced in 75% or more of the papers' findings.</p> <p class="p1">Through this method, we identified 10 essential SE industry segments for inclusion in CS education: software development, software maintenance, process improvement, software economics, knowledge management, project management, software testing, software security, quality assurance, and deployment and operations (DevOps). These findings led to the creation of the AI-USE (Artificial Intelligence Usage in Software Engineering) framework, which maps these 10 key segments to the predominant uses of SE in the industry as identified in the literature. Further inductive content analysis helped us develop subsegments for these essential areas. Ongoing framework development involves refining these subsegments and gathering feedback from industry and academic professionals. We anticipate that the fully developed AI-USE framework will significantly enhance SE education, equipping the next generation of software engineers with the AI proficiency required to address the industry’s evolving demands.</p> Addison Lilholt Thomas Heverin Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 234 242 10.34190/icair.4.1.3034 Empowering Diversity by Building Inclusive Software Engineering Projects with Large Language Models https://papers.academic-conferences.org/index.php/icair/article/view/3023 <p>The integration of emerging AI technologies, particularly Large Language Models (LLMs), is fundamentally reshaping the landscape of software engineering. LLMs offer a wide array of capabilities that can enhance various aspects of software development, including design assistance, automated code analysis and synthesis, and testing. Consequently, their integration into software engineering practices holds the potential to greatly improve efficiency and code quality, marking a notable paradigm shift towards the creation of more intelligent and adaptive systems. This allows the definition of an extended perspective when establishing requirements for building software engineering solutions by incorporating evidence from literature resources and practice. By synthesizing requirements and evidence from scholarly and practitioner sources using LLMs, software engineers can ensure that their solutions are not only technically sound but also align with best practices in fostering social norms and values like inclusivity. Adopting such an approach supports the creation of responsible and inclusive educational software that caters to diverse learning needs and promotes equitable access to educational resources. Moreover, by using LLMs to inform decision-making throughout the software development lifecycle, software engineers can iteratively refine and enhance their solutions based on emerging research findings, thereby ensuring continuous improvement in fostering inclusive educational environments. 
Hence, this research aims to develop a novel evidence-based software engineering method informed by insights from scientific literature. As a use case, we design and implement a dyslexia-oriented educational software application that supports children in learning to read, guided by this new methodological approach.</p> Clara Maathuis Greg Alpar Stefano Bromuri Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 243 251 10.34190/icair.4.1.3023 Navigating Online Narratives on Israel-Hamas War with LLMs https://papers.academic-conferences.org/index.php/icair/article/view/2868 <p>In recent years, international conflicts have seen a steady rise, accompanied by the increased use of technological advancements to achieve strategic and military objectives. One such conflict, the ongoing hostilities between Israel and Hamas, exemplifies the complex interplay of political, ideological, social, and technological factors shaping contemporary warfare. While extensive analysis is devoted to the strategic, historical, and economic dimensions, research efforts still direct limited attention towards the vast troves of information available on global social media platforms such as YouTube. These platforms serve as conduits for a wide range of narratives, opinions, and representations related to this war, providing valuable insights into public discourse and sentiments. Given the capability of automatically and systematically analysing vast amounts of content, this research tackles the above-stated knowledge gap by developing a novel explorative LLMs (Large Language Models)-based modelling approach to efficiently extract, categorize, and interpret nuanced discussions surrounding the Israel-Palestinian war. It does so by following a Data Science methodological approach and focusing on YouTube data, aiming not only to enrich understanding of contemporary conflicts through social media platforms, but also to reflect on the evolving role that state-of-the-art AI technological developments play in shaping the global security landscape.</p> Clara Maathuis Iddo Kerkhof Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 252 259 10.34190/icair.4.1.2868 Unveiling the Emotional Landscape of Ukrainian War Narratives using Large Language Models https://papers.academic-conferences.org/index.php/icair/article/view/3060 <p>Social media platforms serve as a dynamic environment where narratives, sentiments, and ideologies intermingle, catalysing the amplification and dissemination of ideas and experiences. The ongoing war of endurance in Ukraine represents a significant inflection point in the contemporary geopolitical landscape, revealing the urgency of comprehensive understanding and analysis of the implications and consequences of the activities therein. While traditional media outlets, as well as research and practitioner efforts, tackle various social, economic, and political dimensions of this conflict, efforts directed at capturing and reflecting on the nuanced thoughts and emotions of Ukrainian people are still incipient in relation to unconventional social media platforms like TikTok and Telegram. To address this gap, this research develops a novel LLMs (Large Language Models)-based modelling solution that captures, structures, and analyses the discourses characterizing the sentiments and emotions of Ukrainian Telegram users. 
This is done by pursuing a Data Science methodological approach focusing on insights collected during the first year of the war, from February 24, 2022 to February 23, 2023. By harnessing the analytical capabilities of LLMs, this research aims to bridge the gap between conventional understanding and the nuances of human sentiments, thereby advancing comprehension of the multifaceted dimensions of the ongoing war in Ukraine.</p> Clara Maathuis Iddo Kerkhof Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 260 270 10.34190/icair.4.1.3060 AI, Personalized Education, and Challenges https://papers.academic-conferences.org/index.php/icair/article/view/3133 <p>Artificial Intelligence (AI) is gaining traction in education, with potential applications that range from personalized learning to automated administrative tasks. However, the integration of AI into educational systems is not without its challenges. This paper explores both the opportunities and obstacles that AI presents in the field of education, particularly through the use of Intelligent Tutoring Systems (ITS) and Adaptive Learning Management Systems (ALMS). These technologies aim to tailor learning experiences to individual students by analyzing data and adjusting content to suit their needs. While this personalized approach could enhance student engagement and comprehension, it relies on vast amounts of data, raising concerns about privacy and the potential misuse of personal information. Moreover, AI's impact on education extends to supporting educators by providing insights into student performance and automating routine tasks. However, the effectiveness of AI systems in this regard remains questionable, particularly when considering the limitations in current AI models and the challenges of integrating them into existing educational frameworks. The risk of algorithmic bias is also a critical issue, as AI systems can inadvertently reinforce inequalities present in the data they are trained on, leading to unfair or discriminatory outcomes. Additionally, while AI promises to streamline certain aspects of education, there are concerns that over-reliance on technology could depersonalize the learning process. Human educators play a crucial role in fostering not only intellectual growth but also emotional and social development, elements that AI systems are currently unable to replicate. This paper argues that while AI holds significant promise in education, its deployment must be carefully managed, with attention to ethical considerations, equity in access, and the preservation of the human elements essential to effective learning. Addressing these challenges is key to ensuring that AI contributes meaningfully to education rather than exacerbating existing issues.</p> Aziz Mimoudi Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 271 280 10.34190/icair.4.1.3133 From Data to Decisions: Leveraging AI for Proactive Education Strategies https://papers.academic-conferences.org/index.php/icair/article/view/3082 <p>The advancement of Artificial Intelligence (AI) and Large Language Models (LLMs) ushers in a new era in education, characterized by more adaptive, personalized learning experiences. This literature review examines the profound impact of these technologies on student engagement, achievement, and personalized learning within higher education institutions. 
Through a systematic analysis of scholarly articles from 2022 to 2024, this review explores how AI is reshaping educational practices through enhanced feedback mechanisms, predictive analytics, and innovative teaching methodologies. The findings indicate that AI significantly improves student support services by enabling early identification of at-risk students and by facilitating tailored educational interventions. Moreover, the deployment of chatbots and LLMs, such as GPT (generative pre-trained transformer) and BERT (bidirectional encoder representations from transformers), offers promising enhancements in instructional strategies and student assessments, fostering richer, interactive learning environments. However, the integration of these technologies also introduces ethical challenges, necessitating consideration of issues such as data privacy and bias. The review emphasizes the need for ethical frameworks and responsible AI usage to ensure technology enhances educational outcomes without compromising fairness or integrity. Future research directions are suggested, focusing on broader AI applications across various educational settings and the need for longitudinal studies to assess the long-term effects of AI integration in education.</p> Willie Moore Li-Shiang Tsay Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 281 288 10.34190/icair.4.1.3082 Generative AI and its Impact on Activities and Assessment in Higher Education: Some Recommendations from Master's Students https://papers.academic-conferences.org/index.php/icair/article/view/3025 <p>The rapid development of generative AI (GenAI) raises new questions in higher education such as: What should be the university policy regarding GenAI? How ought courses be redesigned for fair and resilient assessment? What are the added pedagogical and didactical values of involving GenAI in teaching and learning activities? Different universities have rapidly created and presented contradictory standpoints and draft policies, and teachers show different opinions regarding the pros and cons of GenAI. This study was carried out from a student perspective, with 16 students examining their own Master's programme on sustainable information provision. The students assessed the assessment practices in their previous courses in the Master's programme. The aim of the study is to investigate how sustainable the course activities and assignments are, and to explore how GenAI tools might support and facilitate teaching and learning activities. Moreover, the students were given the task to test detection software on GenAI-generated solutions to assignments in chosen Master's courses. Students conducted these tasks as part of a 7.5 ECTS project course in the same Master's programme as the investigated courses. For inspiration, and for background information on artificial intelligence for the project work, students participated in the first Symposium on AI Opportunities and Challenges (SAIOC) in December 2023. Data were gathered from the reports of 3 group projects, in which the 16 students investigated 5 freely chosen courses in the programme per group. Besides testing GenAI tools on existing activities and assignments, the students also interviewed the subject matter experts responsible for the chosen courses. Results were first analysed and presented in group reports, combined with 16 individual reflection essays. 
In the individual essays, students were instructed to bring up ethical perspectives on GenAI in higher education, and also to present and discuss suggestions for how the current course design and assignments could be redesigned for improved sustainability and fairness. Finally, all the group reports and the individual reflection essays were thematically analysed by the author, who is also the subject matter expert and main teacher for the project course. </p> <p>Findings show that many of the existing assignments in the Master's programme could be partly solved with different GenAI tools. The AI-generated solutions showed different levels of quality and correctness for different types of activities and assignments. An ethical concern that many student essays brought up was the relatively poor quality of the tested detection software. A question in one of the essays was whether teachers should use detection software with an accuracy rate just above 50% to evaluate student submissions. The recommendations from both the students and the author are to provide clear instructions about when GenAI is and is not allowed in course activities, and to redesign the course structure for continuous assessment. With or without GenAI tools, a continuous assessment, in which the whole study path through a course is assessed rather than only isolated submissions, would strengthen fairness and sustainability. Finally, several students suggested oral examinations as a complement to the existing assessment methods, even if their findings showed that GenAI tools can be used to prepare oral presentations. </p> Peter Mozelius Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 289 295 10.34190/icair.4.1.3025 Educating the Educators on Generative Artificial Intelligence in Higher Education https://papers.academic-conferences.org/index.php/icair/article/view/3026 <p>In the current spring of Artificial Intelligence, the rapid development of Generative AI (GenAI) has initiated vivid discussions in higher education. Opportunities as well as challenges have been identified, and to cope with this new situation there is a need for large-scale teacher professional development. With basic GenAI skills, teachers could use the new technology as an extension of existing technology-enhanced teaching and learning. The aim of this paper is to present and discuss the project FAITH (Frontline Application of AI and Technology-enhanced Learning for Transforming Higher Education). FAITH is a higher education pedagogical development initiative for institutional development, aimed at teachers with good fundamental skills in traditional pedagogy. The project's overall objective is to increase staff understanding of AI and to develop new competencies in the field of GenAI and technology-enhanced learning. The research question that guided this study was: "What are the perceived opportunities, challenges and expectations of involving GenAI in higher education?" The overall research strategy for the FAITH project is design-based research, which involves iterative and cumulative development processes. The early iteration that this study was part of was carried out inspired by Collective Autoethnography, with members of the steering group behind the FAITH project and members of the project team constituting the main focus group. Data were collected through structured interviews, in which two GenAI tools were also interviewed. 
Findings show that the expectations are high, but that the FAITH ambition of institutional development depends on teachers' motivation to take an active part in the project. Another challenge could be that many teachers see GenAI as something that threatens the current course design, and that a general ban on GenAI is the appropriate solution. One of several identified opportunities is that a general revision of syllabi and assessment, adapted for GenAI-enhanced learning, would improve the current course design. </p> Peter Mozelius Marcia Håkansson Lindqvist Martha Cleveland-Innes Jimmy Jaldemark Marcus Sundgren Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 296 302 10.34190/icair.4.1.3026 Transcending Fixed Meanings: Exploring the Impact of Linguistic Relativism on Adaptive Language Models in Generative AI https://papers.academic-conferences.org/index.php/icair/article/view/3166 <p>This research paper aims to start a discourse exploring the impact of linguistic relativism on adaptive language models within the field of generative AI, challenging the traditional fixed-meaning approach to natural language processing (NLP). It argues for a shift towards more personalised AI systems that can adapt to individual users' language nuances, rather than relying solely on large datasets with predetermined meanings. The current NLP models, based on conventional semantics, assume that language has a stable, objective reality where words have universally accepted meanings. This approach limits AI's ability to understand and generate language that reflects personal or contextual variations. The paper argues that generative AI should move towards a model that embraces the fluidity and subjectivity of language, where meanings are not fixed but can change depending on the speaker's intent or the situational context. This would involve incorporating user-specific data and situational awareness into AI systems, enabling them to interpret not just the literal meanings of words but also the speaker's intentions and the circumstantial cues that may alter these meanings. Such an approach would lead to the development of AI systems that are more adaptive and sensitive to the nuances of personal expression and contextual interpretation. However, the paper also acknowledges the potential ethical challenges associated with this approach. If AI systems are designed to allow for fluid and personalized meanings, they could be manipulated to shape public discourse in ways that reflect the biases or intentions of their developers. This raises concerns about the potential misuse of AI in influencing perceptions and realities, particularly when the fluidity of language is taken to an extreme where communication becomes chaotic and ineffective. Ultimately, while personalised language models offer significant potential for enhancing AI's ability to understand and generate human-like language, there is a need for a balance between individual linguistic creativity and the communal aspects of language that ensure effective communication. 
The paper concludes that integrating linguistic relativism into AI models could advance the theoretical understanding of language in AI, but it must be approached with caution to avoid undermining the stability and clarity essential for meaningful human interaction.</p> Subhangi Namburi Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 303 312 10.34190/icair.4.1.3166 GenAI Acceptance in Professional Services: The Case of Management Consulting https://papers.academic-conferences.org/index.php/icair/article/view/3021 <p>The increasing relevance of Generative Artificial Intelligence (GenAI) in professional services, particularly in management consulting, and its impact on client services and operations raises the question: <em>"What factors influence consultants' acceptance of GenAI?"</em> This study explores the intricate factors influencing the acceptance of GenAI, specifically focusing on management consulting. Using an adapted version of the Unified Theory of Acceptance and Use of Technology (UTAUT) as the theoretical framework, a mixed-methods approach was employed: Twenty semi-structured interviews and a quantitative survey of 140 consultants reveal insights into consultants' perceptions and interactions with GenAI. The findings indicate the relevance of performance and effort expectations, social influence, facilitating conditions, and concerns about GenAI's trustworthiness. Highlighting the complexity of human-technology dynamics, some consultants view GenAI as an opportunity to gain a competitive advantage in their career progression, while others report feeling ashamed when disclosing their use of it. This study broadens the scope of technology acceptance research, introduces specific adaptations and extensions of the theory to better fit the GenAI context, and provides practical managerial recommendations.</p> Dennis Nesemeier Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 313 321 10.34190/icair.4.1.3021 Ecosystem Theory and the Adoption of Artificial Intelligence in SMEs https://papers.academic-conferences.org/index.php/icair/article/view/3137 <p>Given the various risks involved in incorporating artificial intelligence (AI) and machine learning into their business operations, firms are at an inflection point regarding how to do so. In this paper, we propose that for small and medium-sized enterprises (SMEs), which lack capital, large quantities of data, and expertise, the best solution would be to join a pre-existing (or developing) ecosystem. Beyond the two potential alternatives available to an SME, going it alone or depending on a larger corporation, we argue for a third option, joining an ecosystem of organizations that use AI systems in their operations, as the golden mean. We conclude with some practical and theoretical implications.</p> Steve Nolan Stelios Zyglidopoulos Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 322 328 10.34190/icair.4.1.3137 Security and safety concerns in the age of AI https://papers.academic-conferences.org/index.php/icair/article/view/3142 <p>Artificial Intelligence (AI) is transforming industries at an astonishing rate, reshaping how we live, work, and interact with technology. Yet, as AI becomes more pervasive, it brings urgent questions about security and safety. 
This article explores these critical issues, drawing a clear distinction between AI security and AI safety: two concepts that are often misunderstood but are crucial for responsible AI deployment. AI security focuses on protecting systems from external threats like data breaches, adversarial attacks, and unauthorized access. As AI systems increasingly handle sensitive data and control critical operations, securing them against such risks is essential. A breach or failure could compromise not only privacy but also the integrity of critical infrastructures. On the other hand, AI safety extends beyond technical defenses to the broader societal implications of AI. Issues like algorithmic bias, ethical decision-making, and unintended consequences of AI systems highlight the risks to human well-being. As AI becomes more autonomous, its alignment with human values and societal norms becomes paramount. Furthermore, the existential risks posed by advanced AI, such as loss of control or unintended outcomes, raise profound questions about the future of human-AI coexistence. This article delves into real-world case studies of AI failures and near-misses, offering tangible insights into the potential consequences of unchecked AI growth. It also explores strategies for mitigating these risks, balancing the pursuit of innovation with the need for transparency, accountability, and ethical oversight. As we look to the future, international cooperation and robust regulatory frameworks are essential to managing AI's growing influence. By examining both technical and ethical dimensions, this article equips readers with a comprehensive understanding of AI security and safety, urging a proactive approach to managing the risks and harnessing the potential of this powerful technology.</p> Victoria Yousra Ourzik Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 329 337 10.34190/icair.4.1.3142 Ethical Generative AI: What Kind of AI Results are Desired by Society? https://papers.academic-conferences.org/index.php/icair/article/view/3061 <p>There are many publications talking about the biases to be found in generative AI solutions like large language models (LLMs, e.g., Mistral) or text-to-image models (T2IMs, e.g., Stable Diffusion). However, there is hardly any publication to be found that questions what kind of behavior is actually desired, not only by a couple of researchers, but by society in general. Most researchers in this area seem to think that there would be a common agreement, but political debate in other areas shows that this is seldom the case, even for a single country. Climate change, for example, is an empirically well-proven scientific fact; 197 countries (including Germany) have declared in the Paris Agreement that they will do their best to limit global warming to a maximum of 1.5°C, but renowned German scientists still call LLMs biased if they state that there is human-made climate change and that humanity is not doing enough to stop it. This trend is especially visible in Western individualistic societies that favor personal well-being over the common good. In this article, we explore different aspects of the biases found in LLMs and T2IMs, highlight potential divergences in the perception of ethically desirable outputs, and discuss potential solutions, with their advantages and drawbacks, from the perspective of society. 
The analysis is carried out in an interdisciplinary manner with the authors coming from as diverse backgrounds as business information systems, political sciences, and law. Our contribution brings new insights to this debate and sheds light on an important aspect of the discussion that has been largely ignored up to now.</p> Marc Lehmann René Peinl Andreas Wagener Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 338 344 10.34190/icair.4.1.3061 AI in Education: Balancing Innovation and Responsibility https://papers.academic-conferences.org/index.php/icair/article/view/3158 <p>Artificial intelligence (AI) is changing education in many ways. This proposed chapter will explore how AI can influence education and the benefits it can provide, as well as the potential drawbacks and how to balance them. It is an accepted fact that personalized learning can improve education.</p> <p>AI-powered platforms can analyze each student's needs and learning style, customizing content and activities accordingly. This personalized approach enables more engaging and effective instruction, helping students learn better and achieve higher academic performance. It automates the usual administrative tasks of grading, scheduling, and record keeping to give educators more time to focus on supporting students.</p> <p>AI enables data analysis and insights, allowing educators to make data-driven decisions to improve teaching methods and student performance. Predictive analytics help identify trends and patterns, guiding targeted support and enhancing overall learning outcomes. It enables new teaching tools like virtual tutors, educational chatbots, and interactive simulations, enhancing student engagement and understanding, ultimately contributing to a more effective learning experience. It also promotes accessibility and inclusion in education by providing tools for students with disabilities and addressing equity issues.</p> <p>Looking ahead, the integration of AI in education supports lifelong learning and career development, offering personalized recommendations for skills development and upskilling based on an individual's strengths, interests, and career goals.</p> <p>Using AI may reduce human connection and interaction, hindering the development of important interpersonal skills in students. Even advanced AI systems cannot fully replicate the emotional intelligence, empathy, and social skills that human teachers bring to the classroom.</p> <p>The large amounts of student data required for AI systems raise concerns about protecting privacy and ensuring data security. There are also worries that AI algorithms may perpetuate unfair biases present in the data used to train them, leading to unequal treatment or opportunities for certain groups of students.</p> <p>As AI continues to evolve and integrate into education, it is vital to resolve these challenges and ensure the responsible and ethical use of this technology. Educators and policymakers need to balance the benefits of artificial intelligence by mitigating potential risks or unintended consequences.</p> Dr Sruthi Pillai R Ramakrishnan Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 345 354 Effective Continuous Quantitative Measures for End-to-End AI Guardrails https://papers.academic-conferences.org/index.php/icair/article/view/3067 <p>Large Language Models such as ChatGPT have brought cutting-edge AI systems into the cultural zeitgeist. 
As a result, AI is no longer an isolated fief of academia or forward-leaning businesses. There are more than 35 million visits to open-source models in public repositories monthly. Clearly, the general technology community has caught onto the power of such systems and is keen to harness the promise of efficiency, productivity, and enhanced capability. Concurrent with this uptrend, AI systems are understood to be potentially vulnerable to various ethical issues. Such issues range from bias and fairness to explainability and trustworthiness. More than mere theory, such vulnerabilities have manifested in mainstream settings such as politics, medicine, and law. The ethical implementation and operation of AI systems is, therefore, of critical interest as the democratization of such systems accelerates. However, there is an ongoing challenge insofar as there is little consensus on what constitutes quantitative ethical and responsible AI guardrails. This leaves AI practitioners without sufficient guidance to implement systems reasonably free from societal-level harm. Accordingly, this work presents a structured taxonomy and concept matrix consisting of 39 discrete guardrails arrayed across a three-phased AI system lifecycle. Measure families further organize these guardrails into areas such as bias mitigation, adversarial robustness, and anomaly monitoring. Then, I provide specific quantitative metrics for each measure construct. The intended takeaway is for AI practitioners to have the means to select appropriate and effective metrics for assuring ethical and responsible guardrails.</p> Jason Pittman Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 355 363 10.34190/icair.4.1.3067 The Comparative Analysis of YOLOv5/v8/v9 for Object Detection, Tracking, and Human Action Recognition in Combat Sports https://papers.academic-conferences.org/index.php/icair/article/view/3031 <p>YOLO models are widely used object detectors in computer vision (CV). This study investigates the relative performance of YOLOv5, YOLOv8, and YOLOv9 for object detection, tracking, and human action recognition in combat sports. The models were evaluated using curated datasets encompassing various combat scenarios, athlete movements, and equipment configurations. Pre-processing protocols and augmentation techniques were applied to improve model accuracy and generalizability, including automated orientation correction, image dimension standardisation, contrast enhancement, and methods such as zoom, rotation, shear, and grayscale conversion. The key findings provide insight into the comparative performance of the models across various evaluation metrics, such as precision, recall, and mean average precision. Each model's ability to detect, track, and recognise human actions in dynamic combat sports environments is evaluated. Computational efficiency and real-time performance were assessed, as these are important indicators for practical applications in coaching, training, and competitive scoring systems. The findings suggest that YOLOv8 offers the best balance of precision and recall, making it particularly suitable for real-time applications in combat sports analytics. 
This study contributes to advancing CV technologies in combat sports analytics, with potential implications for improving athletic training methods, facilitating personalised coaching interventions, and enhancing objectivity and consistency in competitive scoring processes in combat sports.</p> Evan Quinn Niall Corcoran Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 364 373 10.34190/icair.4.1.3031 AI Generative in Brazil's Public Schools: The Teachers' Perspective https://papers.academic-conferences.org/index.php/icair/article/view/3219 <p>Artificial Intelligence (AI) alters the societal landscape and transforms people's lives. In the realm of education, AI offers many opportunities and challenges in teaching and learning. While AI holds promise in enriching learning experiences and fostering motivation, its successful integration into educational practice necessitates that both students and teachers possess digital competence. Education plays a critical role in preparing future workforces. In South America, Brazil stands as the most populous country, with 212.6 million people. The utilisation of AI generative tools, such as ChatGPT, for teaching purposes in the country is on the rise, albeit constrained within the educational context. Challenges faced in integrating AI generative tools in Brazil's education system include the facilitation of plagiarism and teachers' insufficient digital competence. In an effort to delve into the intricacies of teachers' experiences and challenges regarding the integration of AI generative tools into their pedagogical practice, a qualitative study was conducted, and nine teachers in an upper secondary school in Brazil answered a questionnaire involving seven open questions. The teachers perceive the integration of AI generative tools into teaching practice as an instrument that enhances opportunities for improving the quality of teaching, stimulating student interest, and enriching the dynamism of the content learning process. Teachers have highlighted ChatGPT as a valuable tool for research and consultation. This tool can simplify teaching tasks and is typically well-received by students. Both teachers and students can utilise this tool to generate materials such as lyrics and textual variations and even produce fully composed songs ready for listening. ChatGPT is also used to prepare and correct assessments, ensuring consistency in evaluation. Teachers use AI to prepare texts, quizzes, and multiple-choice tests. AI-generated images are also used to enhance illustrations, making learning materials more engaging and visually appealing. Although AI simplifies complex subjects and makes the learning process more engaging, it is essential to provide adequate training on AI tools to enhance student involvement and educational results.</p> Jussara Reis-Andersson Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 374 380 10.34190/icair.4.1.3219 LLM Supply Chain Provenance: A Blockchain-based Approach https://papers.academic-conferences.org/index.php/icair/article/view/3128 <p>The burgeoning size and complexity of Large Language Models (LLMs) introduce significant challenges in ensuring data integrity. The proliferation of "deep fakes" and manipulated information raises concerns about the vulnerability of LLMs to misinformation. Traditional LLM architectures often lack robust mechanisms for tracking the origin and history of training data. 
This opaqueness can leave LLMs susceptible to manipulation by malicious actors who inject biased or inaccurate data. This research proposes a novel approach integrating Blockchain Technology (BCT) within the LLM data supply chain. With its core principle of a distributed and immutable ledger, BCT offers a compelling solution to address this challenge. By storing the LLM's data supply chain on a blockchain, we establish a verifiable record of data provenance. This allows for tracing the origin of each data point used to train the LLM, fostering greater transparency and trust in the model's outputs. This decentralised approach minimises the risk of single points of failure and manipulation. Additionally, the immutability of blockchain records ensures that the data provenance remains tamper-proof, further enhancing the trustworthiness of the LLM. Our approach leverages three critical features of BCT to strengthen LLM security: 1) Transaction Anonymity: While data provenance is recorded on the blockchain, the identities of data contributors can be anonymised, protecting their privacy while ensuring data integrity. 2) Decentralised Repository: Enhances the system's resilience against potential attacks by distributing the data provenance record across the blockchain network. 3) Block Validation: Rigorous consensus mechanisms ensure the validity of each data point added to the LLM's data supply chain, minimising the risk of incorporating inaccurate or manipulated data into the training process. Using an experimental approach, initial evaluations with simulated LLM training data on a blockchain platform demonstrate the feasibility and effectiveness of the proposed approach in enhancing data integrity. This approach has far-reaching implications for ensuring the trustworthiness of LLMs in various applications.</p> Shridhar Singh Luke Vorster Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 390 397 10.34190/icair.4.1.3128 Generative AI and the End of Education https://papers.academic-conferences.org/index.php/icair/article/view/3155 <p>This paper reviews the prevalent rise of generative artificial intelligence and its impact on HE in the UK. In doing so, it echoes a central thought from Neil Postman's provocatively named book, which reflects on the nature and purposes of education. Whilst GenAI may be approached with a constructivist disposition, much of the response in practice and policy is marked by technological determinism. A constructivist approach, however, allows us to (re)consider the nature and value of education. The paper therefore maintains the angle of the educator in asking: what are we trying to do in (higher) education? What is the purpose of Higher Education today? This is a cross-disciplinary question, much like Postman's propositions were a cross-curricular reflection on the nature of schooling. Postman looked for a unifying narrative that can inspire 'the ends of education', or what education actually is and tries to accomplish, before considering its tools and approaches towards those goals. Those ends ensure that education does not become subject to false gods, such as economic utility and consumerism. These would spell the demise of any meaningful education, or 'the end of education'. What appears foundational to these questions is a belief in the nature of human potential. In education, the cultivation of that potential is arguably a fundamental end, whichever way education is organised. 
Where the cultivation of that potential leads, however, is open: it can remain rooted in a humanist framework, it could become posthumanist, or it could be simply bewildering. With the means of generative AI, the analysis raises questions on epistemic threat versus intellectual success, and new horizons of creative possibility in human-computer interaction. The human potentialities of thinking, interpreting, criticality, and scepticism come forward as retained elements of the humanist narrative. At the dawn of generative AI as a teaching, learning, and assessment instrument, the end of education remains human potential. Higher Education, especially, remains the place to critically and ethically stimulate the human mind towards new horizons of knowing.</p> Stockman Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 390 397 10.34190/icair.4.1.3155 Can Robots Laugh? An Inquiry in the Nature of AI https://papers.academic-conferences.org/index.php/icair/article/view/3213 <p>In this paper we discuss the possibility of robots having a mind, being able to act like human beings and even surpassing human intelligence, and, in consequence, taking over the world. It is a possibility that was put forward in human history long ago, and that has been accentuated by the new advances in technology of the last few years, of which ChatGPT is the latest well-known example. We base ourselves on a literature review made on eight basic features we define as characteristic of humans, namely: Reproduction, Creation, Belonging, Citizenship, Self-Awareness, Mortality, Rationality, Humour, Feelings and Emotions. We use a plurality of databases, such as Google and SCOPUS. As a result, we conclude that even if robots may express themselves as humans do, and may beat humans in specific activities, they lack most of the features that define human beings and most probably always will. As with time and space travelling, robots that would take power on Earth are a utopia that will probably never happen, but whose pursuit will be beneficial for the human race. The paper has the limitation of being only theoretical, and the originality of being based on the Philosophy of Artificial Intelligence and presented in a scientific environment.</p> Eduardo Tomé Elizaveta Gromova Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 398 404 10.34190/icair.4.1.3213 An Analysis of Knowledge Management Changes Through Artificial Intelligence with Probst's Model https://papers.academic-conferences.org/index.php/icair/article/view/3156 <p>In recent years, the integration of Artificial Intelligence (AI) into Knowledge Management (KM) has led to transformational changes. These changes have significantly enhanced traditional KM processes. To identify how AI technologies improve and reshape knowledge processes, this study conducted a systematic literature review. The review identified AI technologies suitable for each of Probst's building blocks, which outline the eight central KM processes. The research reveals a wide range of AI technologies, including machine learning, natural language processing and chatbots such as ChatGPT. These technologies can be applied in different domains and introduce innovative approaches to improve KM processes. Based on the AI technologies analysed, this study proposes a four-stage model to support the documentation and application of best practices and lessons learned. 
The model is designed to enhance the knowledge development process and aims to document and secure key project developments in the long term. A further objective was to analyse which KM process is most affected by chatbots. The findings indicate that chatbots have the potential to transform the use of knowledge in organisations. They act as facilitators by breaking down existing barriers, fostering an open culture of knowledge sharing, streamlining workflows and increasing the accessibility of knowledge. The study also examines the broader changes that AI will bring to KM and forecasts the sixth generation of KM. It draws on Bencsik's (2021) evolutionary and revolutionary perspectives that specifically forecast this next generation. The study shows that AI not only enhances existing KM processes but also has the potential to fundamentally disrupt traditional methods and approaches. These findings underline the need for future research to explore the effective integration and scalability of AI technologies in real-world KM environments. This will help ensure that their long-term impact and potential benefits are fully understood across different industries and organisational contexts.</p> Burak Toptas Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 405 414 10.34190/icair.4.1.3156 Artificial Intelligence and its Role in Legislative Practices https://papers.academic-conferences.org/index.php/icair/article/view/3134 <p>One of the most significant challenges faced by the average person is understanding the general context of the legislation that applies to them. The legal discipline has certain characteristics that make it almost esoteric for those who are not part of it; however, it is necessary to know the legal norms as comprehensively as possible. People today are no longer just "from a village/town/province/country", but have become something almost universal within the frameworks of globalization and the vast library the Internet offers at nearly zero cost.</p> <p>Artificial intelligence (AI) is a technology that is beginning to fundamentally change society, and within this "sea of transformations", the law, and legal and political practices in general, cannot avoid contact and change. There is no area of law not being altered, but in my opinion, the most significant place where transformations will be recorded is in legislating and drafting normative acts.</p> <p>Legislative operations are always complex and rarely bring satisfaction to those subject to regulation, given the relationship between the rights and obligations set out by normative acts. At the same time, it is challenging to legislate in an increasingly complex society, where enduring and situational interests intersect, where there are poor-quality mechanisms in legal documentation, and where the concept of legality is not always correctly perceived in the political environment. AI should offer a greater understanding of legal concepts to both ordinary citizens and legislators, precisely through its extensive library and its demonstrated synthesis and analysis capabilities.</p> <p>Therefore, the use of AI in the legislative process will become increasingly necessary as its capabilities grow, primarily to produce faster syntheses of legal documentation, essential for correctly understanding the context that justifies the need for new legislation. 
I believe that in the coming years, no country will escape this change, and AI will help eliminate some of the poor-quality practices that do not offer countries and citizens better prospects for wealth and professional development.</p> Marius Vacarelu Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 415 422 10.34190/icair.4.1.3134 Artificial intelligence and the ethics of tomorrow https://papers.academic-conferences.org/index.php/icair/article/view/3017 <p>Traversing our digital information society safely and responsibly rests mainly on our comprehension of the vast sociotechnical nature of AI ethics risks, their implications and consequences. Ultimately, we all would prefer to live in a mature information society that is technologically just, inclusive and sophisticated, firmly rooted in ethical information philosophy and values. In this paper, the findings of a scoping review of recently reported research look, in particular, at the sociotechnical changes and impact that disruptive AI innovation has on societies, and how this could impact new and futuristic nuances in AI ethics. The study delves into the interdisciplinarity of AI ethics. The role of intergovernmental collaboration in researching and availing frameworks and guardrails to uphold AI ethics is critically interrogated and explored. The study alludes to gaps in current research around AI ethics and stresses the need to deliberate on future AI ethics dimensions. The prerequisites for fostering further confidence and trust in AI technology are synthesised. The study concludes that inclusivity and justice in AI ethics are not yet achieved on a global level, and that there is still a tendency towards cultural and other biases in designing, planning, implementing and also regulating AI. More research is needed on the impact and trends of AI innovation in the Global South compared to the Global North.</p> Brenda van Wyk Marlene Holmner Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 423 431 10.34190/icair.5.1.3017 Comparing human-labeled and AI-labeled speech datasets for TTS https://papers.academic-conferences.org/index.php/icair/article/view/3030 <p>As the output quality of neural networks in the fields of automatic speech recognition (ASR) and text-to-speech (TTS) continues to improve, new opportunities are becoming available to train models in a weakly supervised fashion, thus minimizing the manual effort required to annotate new audio data for supervised training. While weak supervision has recently shown very promising results in the domain of ASR, speech synthesis has not yet been thoroughly investigated regarding this technique despite requiring the equivalent training dataset structure of aligned audio-transcript pairs. In this work, we compare the performance of TTS models trained using a well-curated and manually labeled training dataset to others trained on the same audio data with text labels generated using both grapheme- and phoneme-based ASR models. Phoneme-based approaches seem especially promising, since even for wrongly predicted phonemes, the resulting word is more likely to sound similar to the originally spoken word than for grapheme-based predictions. For evaluation and ranking, we generate synthesized audio outputs from all previously trained models using input texts sourced from a selection of speech recognition datasets covering a wide range of application domains. 
These synthesized outputs are subsequently fed into multiple state-of-the-art ASR models, whose output text predictions are compared to the initial TTS model input texts. This comparison enables an objective assessment of the intelligibility of the audio outputs from all TTS models by utilizing metrics like word error rate and character error rate. Our results not only show that models trained on data generated with weak supervision achieve comparable quality to models trained on manually labeled datasets, but that they can outperform the latter, even for small, well-curated speech datasets. These findings suggest that the future creation of labeled datasets for supervised training of TTS models may not require any manual annotation but can be fully automated.</p> Johannes Wirth René Peinl Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 432 438 10.34190/icair.5.1.3030 Challenges in AI Implementation: Perspectives from Practice and Research https://papers.academic-conferences.org/index.php/icair/article/view/3051 <p>Artificial Intelligence (AI) has become an inevitable topic for organizations across various sectors and sizes, offering promising applications as technological accessibility continues to expand. Despite its potential, practical implementation of AI-based systems remains difficult, with particular challenges tied to specific organizational contexts. Companies often invest heavily in AI development but encounter problems such as prototypes failing to reach market readiness or systems struggling to deliver expected benefits. These setbacks often stem from flawed implementation strategies, excessive reliance on technology, or inadequate integration into existing organizational frameworks. Therefore, this paper addresses these challenges encountered at different phases of AI implementation projects. To this end, we initially conduct a Rapid Structured Literature Review (Armitage and Keeble-Allen, 2008), examining the literature on AI implementation cases and associated scholarly reviews. Extending the initial analysis, we experiment with AI-driven document analysis as a means to integrate findings from a greater number of publications into the review. The literature review is subsequently complemented by insights from our own experiences in the field of AI consultancy. The paper gives an overview of the most salient challenges in AI implementation projects and points out some approaches to mitigate those challenges. From a methodological standpoint, it shows that AI-driven reviews can yield similar results to conventional reviews, but may lack some explanatory depth. We find that a combination of manual and automated approaches tends to be the most effective strategy.</p> Clemens Kerschbaum Raphael Dachs Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 483 490 10.34190/icair.5.1.3051 Driving the Future: AI in Transportation https://papers.academic-conferences.org/index.php/icair/article/view/3210 <p>Transportation lies at the heart of our society, shaping nearly every aspect of our lives. Through its most fundamental function, mobility, transportation upholds the interconnected nature of our world. Economically, transportation controls the distribution of products, enabling mass production and global trade. It also expands personal mobility, broadening access to jobs and fostering communal connectivity. 
Culturally and socially, transportation diffuses ideas, values, and customs while providing access to essential services like medical care and education. Given these profound impacts, it is clear that the transportation industry has undergone drastic changes since its inception, often driven by single innovations that redefine mobility. Consider the invention of the steamboat in 1787. Upon its introduction, passenger travel became widely accessible, enabling a stream of ideas and culture to diffuse across nations. Simultaneously, it transformed commercial shipping by reducing the time and cost required to transport cargo. Modern means of transportation have evolved into a network of defined highways and roadways. To keep order in this network, our transportation system requires a solution: one capable of handling its diverse array of challenges and complexities. Enter Artificial Intelligence (AI). AI-driven solutions within the transportation system offer a dynamic and flexible approach. With the ability to continuously adapt to the fast-moving pace of our transportation network, AI has garnered the attention of many. In recent years, this growing interest has resulted in the acceleration of research at the intersection between AI and transportation, signifying the beginning of a tremendous shift in the way mobility is perceived. The ever-increasing demand for transit in our society brings forth pressing issues such as environmental decay, traffic congestion, and safety risks. This paper will delve into AI applications that hold enormous potential for addressing these challenges. This paper will also explore the multifaceted implications of AI in transportation, providing a detailed overview of its impacts in the areas of efficiency, safety, and sustainability. Then, taking a step back, the paper will outline broader implications for our society, specifically how AI-enabled changes in the transportation industry will impact society at large. Key considerations surrounding implementation in these sectors will be presented, outlining effective strategies associated with AI-enabled transportation. The adoption of new technology in our society is normally accompanied by various risks. Unfortunately, AI in transportation is no different. To ensure that the arrival of AI-enabled transportation does not take a turn for the worse, this paper will consider potential repercussions in data security, algorithmic bias, and user privacy. Furthermore, it will discuss the adequacy of current regulations related to AI transportation. Lastly, the paper will focus on the future direction of AI in furthering the reliability of transportation systems. It will present information regarding research gaps and future goals, all with the aim of demonstrating the multitude of opportunities unfolding in the transportation industry. 
In doing so, this paper intends to demystify the seemingly “intimidating” sector of AI in transportation while emphasizing the importance of accountability in its integration.</p> Isabella Xu Murtaza Haque Song Fu Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 491 499 10.34190/icair.5.1.3210 Students' AI-generated Images: Impact on Motivation, Learning, and Satisfaction https://papers.academic-conferences.org/index.php/icair/article/view/3243 <p>In contemporary society, where the rapid development of generative AI (GenAI) infiltrates daily life, it is imperative for schools to keep up. Future generations are expected to have GenAI skills and knowledge in the same way that traditional literacy is a recruitment condition today. School curricula need to integrate this new technology to support students' learning and development. Research on artificial intelligence in education (AIED) has reported on the challenges of involving GenAI in teaching and learning, but also on GenAI as a study support. To avoid falling behind, many schools have launched AIED initiatives, which creates a need to study how GenAI could be applied usefully in teaching and learning activities. The research question that guided this study was: In what ways could the use of GenAI in visual form support students' learning process and motivation in upper secondary school settings? The overall research strategy was a qualitative case study approach with investigator triangulation. Data were collected through a combination of observations at workshop sessions and semi-structured interviews with teachers as well as students. In a six-step inductive thematic analysis, data excerpts were coded, aggregated into categories, and presented. Findings indicate that GenAI tools for image generation can have a positive effect on learning. While memorisation of information was supported, there was also a positive impact on motivation and student satisfaction. Image generation tools are not a substitute for, but rather a complement to, traditional teaching and learning activities. The conclusion is that the use of AI in education can offer new learning opportunities, and with the increased use of GenAI, it is crucial for both students and teachers to keep pace. However, this would require more time and resources for teacher professional development on AIED.</p> Cornelia Berg Liv Omsén Henrik Hansson Peter Mozelius Copyright (c) 2024 International Conference on AI Research 2024-12-16 2024-12-16 4 1 500 506 10.34190/icair.4.1.3243 Artificial Intelligence, Smart Topological Data Analysis and Chaos in Business Continuity Management: The Case of COVID-19 in Birmingham Airport https://papers.academic-conferences.org/index.php/icair/article/view/3329 <p>The latest state-of-the-art empirical methods from chaos theory incorporate smart topological data analysis (STDA), combining chaos theory methods, topological machine learning, adaptive artificial intelligence systems, topological data analysis, and fractal analysis for attractor reconstruction and the topological study of dynamics, with impact on risk science and complexity research. In the current work, we apply a topological adaptive AI system to the study of Birmingham airport’s air traffic dynamics, and employ topological data analysis, chaos theory methods, and multifractal analysis to study the resulting dynamics.
Our results show the presence of a form of stochastic chaos with a low-dimensional attractor associated with long-wave dynamics in the pre-COVID-19 period, which continues through the COVID-19 crisis and the subsequent recovery around a rising trend, with the topological AI system able to adapt to the COVID-19 crisis and predict the dynamics during this period with high performance. Multifractal analysis methods, applied to the adaptive topological AI system’s residuals, show that the dynamical noise affecting the chaotic attractor is multifractal, with a multifractal phase transition occurring during the COVID-19 recovery period. Implications of the methods and results for business continuity management are drawn.</p> Carlos Gonçalves Carlos Rouco Copyright (c) 2024 International Conference on AI Research 2025-02-03 2025-02-03 4 1 506 516 10.34190/icair.4.1.3329
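<p>The attractor reconstruction mentioned in the abstract above is typically performed via time-delay (Takens) embedding. The brief Python sketch below shows the generic technique on a synthetic series; it is a minimal illustration under assumed parameters, not the authors' STDA pipeline, and the embedding dimension and delay are arbitrary choices here.</p>
<pre>
# Generic time-delay (Takens) embedding for attractor reconstruction.
# Illustrative only: the synthetic series and the parameters dim and tau
# are assumptions, not taken from the paper.
import numpy as np

def delay_embed(x, dim, tau):
    """Map a scalar time series to points in a dim-dimensional state space."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

rng = np.random.default_rng(0)
t = np.arange(2000)
series = np.sin(0.05 * t) + 0.1 * rng.normal(size=t.size)  # stand-in for traffic data
points = delay_embed(series, dim=3, tau=10)  # candidate attractor in 3-D
</pre>
<p>The reconstructed point cloud is the object on which topological data analysis (for example, persistent homology) and fractal-dimension estimates would then be computed.</p>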
The next generation of AI tools are coming, do we need to better regulate them? https://papers.academic-conferences.org/index.php/icair/article/view/3033 <p>This research investigates the importance of robust regulation and legislation to properly govern the development of ‘Frontier Artificial Intelligence Systems’. To do this, a comparison of existing and proposed legislation from the US, UK, and EU has been undertaken. This involved reading summations, assessments, and opinions from academic writers and the media, and drawing comparisons with regulatory issues in other industries, such as social media. The key findings highlight the different approaches being taken to the same problem. EU legislation that came into force on 1 August 2024 takes a safety-first approach, identifying risk levels from ‘unacceptable risk’, which would be prohibited, down to ‘minimal risk’, which would remain unregulated. This contrasts with the UK White Paper, which advocated an innovation-first approach with a secondary focus on safety. Given the potential risks associated with AI-enhanced cyber-attacks and the spread of disinformation across platforms, this research emphasises the importance of strong safety regulation, built in from the outset of ‘Frontier AI’ development rather than as an afterthought.</p> Michael Aubrey Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 440 447 10.34190/icair.4.1.3033 “Should everyone have access to AI?” Perspectives on Ownership of AI tools for Security https://papers.academic-conferences.org/index.php/icair/article/view/3029 <p>Given the widespread concerns about the integration of Artificial Intelligence (AI) tools into security and law enforcement, it is natural for digital governance to strive for greater inclusivity in both practice and design (Chohan and Hu, 2020). This inclusivity can manifest in several ways, such as advocating for legal frameworks and algorithmic governance (Schuilenburg and Peeters, 2020), giving individuals choice, and addressing unintended consequences in extensive data management (Peeters and Widlak, 2018). An under-reflected aspect is the question of ownership, i.e., who should be able to possess and deploy AI tools for law enforcement purposes. Our interview findings from 111 participants across seven countries identified five citizen viewpoints on the ownership of security-related AI: (1) Police and police-governed agencies; (2) Citizens who disassociate themselves; (3) Entities other than the police; (4) All citizens, including themselves; and (5) No one or unsure. The five clusters represent disparate perspectives on who should be responsible for AI technologies, as well as related concerns about data ownership and expertise, and thus link into broader discussions on responsibility for security, i.e., what deserves protection, how, and by whom. The findings contribute theoretically to digitalization, smart technology, social inclusion, and security studies. Additionally, the study seeks to influence policy by advocating for AI development that addresses citizen concerns, thereby mitigating the risks and the social and ethical implications associated with AI. Crucially, it aims to highlight citizens’ concerns about the potential for malicious actors to exploit ownership of such powerful technology for harmful purposes.</p> Yasmine Ezzeddine Petra Saskia Bayerl Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 448 455 10.34190/icair.4.1.3029 How Is Fake News Spread? An Analysis of the Dissemination Process: Actors, Channels, and Motives https://papers.academic-conferences.org/index.php/icair/article/view/3054 <p>Even though fake news is widely recognized as one of the most serious threats of the post-truth era, there are still gaps in research on fake news dissemination. To address these gaps, it is necessary to conduct theoretical research reviewing and analyzing the latest scientific developments on this topic, paying attention to the social, psychological, and technological contexts in which fake news is constructed and spread. Understanding this process might help to improve users' media literacy by raising awareness and forming critical attitudes toward the by-products of the information environment. This article presents a literature review on fake news dissemination based on an analysis of 106 papers extracted from the Web of Science, Scopus, and Google Scholar databases. The author focuses on identifying the main actors who spread fake news. Additionally, a typology is proposed to organize knowledge about the motives behind creating fake news, based on social, psychological, and cognitive factors.</p> Anastasiia Iufereva Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 456 462 10.34190/icair.4.1.3054 Towards the Virtual - AI Beauty Pageants and Their Implications for Society https://papers.academic-conferences.org/index.php/icair/article/view/3132 <p>The growth of Artificial Intelligence is accelerating rapidly, making AI present in almost every aspect of human life: digital assistants on smartphones, smart homes, autonomous cars, and even recently launched AI beauty pageants. The very first Miss AI pageant, announced by the World AI Creator Awards (WAICA) program and the company Fanvue, marks a new era in beauty contests.
The main idea of Miss AI is to present an AI-based character that is evaluated on its beauty, poise, and responses to typical beauty pageant questions. What makes it innovative is that Artificial Intelligence not only participates but also serves as a judge: the jury, in addition to human members, includes virtual influencers such as the Spanish digital model Aitana Lopez. Beauty contests have always been controversial - after all, they involve judging individuals by their appearance and setting beauty standards. However, including Artificial Intelligence in this phenomenon raises even more profound ethical questions and blurs the boundary between the virtual and the real world. This article examines how the idea of Miss AI affects its audience, which primarily belongs to a social-media-driven society. The author used content analysis and case study methods to examine the topic. Drawing on the example of one of the most popular virtual influencers, Aitana Lopez, and her increasing popularity on social media, especially Instagram, the author reflects on the ethical implications of the Miss AI concept and its ongoing perception by society. Given the continuing growth of AI beauty pageants, the article provides initial insights into the topic and suggests potential future developments. What can be said for certain is that recent years have brought a significant AI revolution that affects, and will continue to affect, a growing number of dimensions of human existence. However, this raises many concerns, increasingly centered on ethics.</p> Ida Knapik Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 463 471 10.34190/icair.4.1.3132 Exploring Student Perspectives on Generative AI in Requirements Engineering Education https://papers.academic-conferences.org/index.php/icair/article/view/3136 <p>The rapid development of generative AI (GenAI) technologies in recent years has created new opportunities as well as new challenges in higher education. While many studies in computer science have focused on GenAI in programming education, fewer have examined its possibilities and challenges in requirements engineering (RE). This study explores the impact of GenAI on the pedagogical aspects of RE in higher education, focusing on the student perspective, to analyse how GenAI might influence learning experiences, knowledge acquisition, and skill development. The main research question was: "What are students’ perspectives on the integration of GenAI in the educational practices of requirements engineering?" An action research strategy was employed, with one of the authors also serving as the teacher of the investigated course. A mixed-methods approach was used to collect qualitative and quantitative data from workshops and surveys. During the workshops, students used ChatGPT to generate and evaluate software requirements and compared these to manually crafted requirements. Thematic analysis of the qualitative data captured students’ perspectives, while survey data identified trends and preferences. Findings show that while students generally had a positive experience with GenAI, valuing its efficiency and the quality of the generated requirements, they also recognized the need for human oversight to maintain accuracy. The study highlights both the opportunities and the challenges of using GenAI in RE education.
While GenAI increased learning engagement and helped with brainstorming, students faced difficulties in creating effective prompts and found it time-consuming to refine AI-generated requirements. A hybrid approach, combining AI-generated and manually created requirements, proved most effective by balancing AI's advantages with human insights. Further research is needed on how GenAI could be effectively integrated into computer science education.</p> Nicklas Mellqvist Peter Mozelius Copyright (c) 2024 International Conference on AI Research 2024-12-04 2024-12-04 4 1 473 481 10.34190/icair.4.1.3136
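<p>As a concrete illustration of the workshop activity described in the abstract above (students prompting ChatGPT to draft requirements for later human review), the following Python sketch shows one way this could be done programmatically. It is an assumption-laden example, not the course material: the <code>openai</code> client library, the model name, and the prompt wording are all illustrative choices.</p>
<pre>
# Illustrative sketch (not from the paper): draft software requirements with an
# LLM for subsequent human review. Model name and prompt are assumptions.
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

def draft_requirements(feature_description, n_requirements=5):
    """Ask the model for numbered, testable requirements for a feature."""
    prompt = (
        f"Write {n_requirements} numbered, testable software requirements "
        f"for the following feature: {feature_description}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content  # to be refined by a human

print(draft_requirements("a library app that lets students reserve study rooms"))
</pre>
<p>Consistent with the study's findings, such generated requirements would be treated as a starting point and refined manually, mirroring the hybrid approach the authors found most effective.</p>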