International Conference on AI Research https://papers.academic-conferences.org/index.php/icair <p>The International Conference on AI Research (previously known as the European Conference on the Impact of AI and Robotics) has been run on an annual basis since 2021. Conference Proceedings were published in 2021 and 2022, with a post-Covid break in 2023, and authors have been encouraged to upload their papers to university repositories. In addition, the proceedings are indexed by a number of indexing bodies.</p> <p>From 2022, the publishers have made all conference proceedings fully open access. Individual papers and full proceedings can be accessed via this system.</p> <p><strong>PLEASE NOTE THAT IF YOU WISH TO SUBMIT A PAPER TO THIS CONFERENCE YOU SHOULD VISIT THE CONFERENCE WEBSITE AT <a href="https://www.academic-conferences.org/conferences/icair/">https://www.academic-conferences.org/conferences/icair/</a>. THIS PORTAL IS FOR AUTHORS OF ACCEPTED PAPERS ONLY.</strong></p> en-US papers@academic-conferences.org (Louise Remenyi) sue@academic-conferences.org (Sue Nugus) Thu, 04 Dec 2025 00:00:00 +0000 OJS 3.3.0.13 http://blogs.law.harvard.edu/tech/rss 60 Integrating Ethical, Legal, and Technological Safeguards in Space-Focused Cyberbiosecurity: AI, Cloud, and Crew Considerations https://papers.academic-conferences.org/index.php/icair/article/view/4391 <p>Long-duration crewed missions and orbiting habitats such as the International Space Station (ISS) present unique intersections of biological and cybersecurity risks. Cyberbiosecurity, a hybrid field that combines biosecurity and cybersecurity in the investigation of system vulnerabilities, is being addressed across multiple domains on Earth but remains underexplored in space environments.
The closed-loop life-support, modular robotics, and telemetric control systems aboard space stations create novel attack surfaces, while microgravity and radiation alter microbial behavior in ways that could exacerbate bio-contamination risks. Additionally, the use of artificial intelligence (AI) for equipment health monitoring, autonomous robotics, and crew support introduces new vulnerabilities, as adversarial inputs or model poisoning could compromise critical diagnostics and decision-making aids. Cloud-based infrastructures used for off-board data storage, analytics, and command relay further expand the threat surface, requiring rigorous cloud security, encryption, and isolation protocols to prevent unauthorized access or data exfiltration. This paper explores potential attack vectors in both cyber- and bio-informed arenas across launch, transit, and orbital habitats, and proposes forward-looking countermeasures against these attacks. We outline a framework that incorporates ethical and legal considerations, including crew privacy rights and compliance with international space treaties and biosafety regulations. By combining AI-robust design principles, secure cloud architectures, and clear legal guidelines, our approach aims to safeguard space-based biological operations, uphold crew well-being, and ensure mission resilience against emerging cyberbiological threats.</p> Dominique Dove, Kenneth Chamberland, Sotirios F. Karathanasis, Lucas Potter, Xavier-Lewis Palmer Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4391 Wed, 10 Dec 2025 00:00:00 +0000 Android Malware Detection (AMD) Based on Shallow Feature and Permission Correlation https://papers.academic-conferences.org/index.php/icair/article/view/4392 <p>There are apps for everything, from online banking and social software to shopping. They have become one of the most important tools in our daily lives.
A mobile device therefore stores most of its owner's personal information, including photos, credit cards, and communications. If an intruder succeeds in hacking into the device, all of this private data is exposed to leakage threats. Malware is the most common tool used by attackers to compromise a mobile phone. In particular, it is often disguised as a popular application in an obfuscated or packed form, which is the main reason why it is difficult to distinguish malware from legitimate apps. In this article, we adopt machine learning techniques to develop a static analysis mechanism for Android malware detection based on shallow feature and permission correlation (AMD). AMD first analyses the Application Programming Interfaces of the target to detect all possible and hidden privilege threats. It then filters this obfuscation information using permission correlation to eliminate noise and identify meaningful malicious indicators. The proposed approach leverages the correlation patterns between permissions and API calls to distinguish suspicious behaviours from legitimate ones. Thus, AMD can extract all the representative shallow features to achieve a high detection rate. Simulation results have shown that AMD outperforms related works on the CICAndMal2017 and CICMalDroid2020 datasets, which confirms the effectiveness of shallow features and permission correlation.</p> Jung-San Lee, Yun-Yi Fan, Gah Wee Yong, Ying Chin Chen Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4392 Wed, 10 Dec 2025 00:00:00 +0000 Artistic intelligence: Distinguishing Human from AI-Generated Art Through Perception https://papers.academic-conferences.org/index.php/icair/article/view/4403 <p>With recent advances in technology, the ability of AI to produce human-like art calls into question established notions of creativity, authorship, and artistic value.
This study explores the differences between human-created and AI-generated artwork through human perception. We analyze whether AI-generated art can be consistently distinguished from human-created art based on qualities such as emotional effect, aesthetic depth, and originality. Our methodology involves a perception study in which people rate and attempt to classify images as human- or AI-created. The rating criteria are emotional effect, perceived beauty, detail richness, and subjective creativity. We also ask participants whether they can identify common signs of AI-generated images, such as over-smoothed transitions, blurred details, and unnatural or flat viewpoints. We measure the accuracy of human classification, comparing the degree of consistency or inconsistency in participants' judgments. This transdisciplinary project aims to shed light on core differences in perception and structure between AI and human creativity. It also raises broader ethical and philosophical questions about creativity, ownership, and cultural value in the AI age. In particular, we question whether AI art models trained on copyrighted content can produce original images that are just as good as human artwork. By comparing subjective human judgments, this research seeks to clarify how AI art is understood, evaluated, and possibly distinguished from traditional human imagination.
The results of this research will contribute to current discussions at the intersection of technology, art, and ethics, providing a foundation for more open and ethical AI art production.</p> Arush Jha, Anusha Nigam, Shreyas Kumar Copyright (c) 2025 International Conference on AI Research https://creativecommons.org/licenses/by-nc-nd/4.0 https://papers.academic-conferences.org/index.php/icair/article/view/4403 Tue, 06 Jan 2026 00:00:00 +0000 Could AI Podcasts Improve Students’ Academic Performance AND Engagement?: An Empirical Assessment https://papers.academic-conferences.org/index.php/icair/article/view/4455 <p>In recent years, podcasts have emerged as a significant educational tool that influences students' learning, academic performance, and retention, particularly within the context of business education. This study investigates the potential of Artificial Intelligence (AI)-generated podcasting as an innovative instructional tool to enhance student academic performance in business education. Leveraging generative AI to create short, topic-specific audio summaries, we integrated podcast episodes into an undergraduate business analytics course and evaluated their impact on student outcomes. Using a quasi-experimental design, we compared performance metrics between students who received AI podcast supplements and a control group with traditional materials. Survey and focus group data were also collected to assess engagement, comprehension, and perceived value. The study highlights how tailored, on-demand audio content can support student success in data-driven, concept-heavy business courses. The impact of podcast listening on students’ learning, academic performance, and retention in business education is multifaceted. The integration of podcasts into educational practices offers distinct advantages, including enhanced listening skills, greater engagement through relatable content, and accessibility that supports self-directed learning.
Nevertheless, educators must navigate potential challenges in implementation to fully leverage the benefits of this dynamic learning tool. Future research should continue to explore innovative podcast applications and best practices to optimize learning outcomes in business education.<br><br></p> Gokhan Egilmez, Karolina Schneider Copyright (c) 2025 International Conference on AI Research https://creativecommons.org/licenses/by-nc-nd/4.0 https://papers.academic-conferences.org/index.php/icair/article/view/4455 Wed, 21 Jan 2026 00:00:00 +0000 Artificial Intelligence leads to Circular Economy: A Bibliometric Review and Future Agenda https://papers.academic-conferences.org/index.php/icair/article/view/4324 <p>Despite extensive research on Industry 4.0 and the circular economy (CE), there is limited research on the nexus between artificial intelligence (AI) and the circular economy. AI appears to be a driving force for revolutionizing businesses and industries, unlocking economic, environmental, and social benefits. We investigated how AI enables the transformation from linear to circular business models. The authors selected 105 peer-reviewed articles from the Web of Science database using bibliometric analysis and analyzed the data with VOSviewer software. Four core clusters were identified: (1) the circular economy as a pathway to sustainable business management, (2) big data models that enhance CE outcomes, (3) I4.0 technologies that lead to the future of CE, and (4) digitalized supply chains for sustainable development. This review advances our understanding of AI and the circular economy, a nexus that has received little attention from prior scholars. More importantly, this review contributes to shifting the focus to a technological perspective within the circular economy, diverging from the traditional linear model based on economic views.
This review suggests that companies should adopt AI, as it plays a pivotal role in facilitating the shift to a circular economy by reshaping and enhancing existing models of product design, manufacturing, consumption, repair, regeneration, recovery, and end-of-life management while simultaneously improving the efficiency of waste management. This research provides fresh perspectives on AI and CE, framing AI as an opportunity rather than a cost amid the ongoing fourth industrial revolution.</p> Zuhair Abbas, Jaroslav Belas, Rasa Smaliukiene Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4324 Thu, 04 Dec 2025 00:00:00 +0000 How GenAI Use Cases Emerge and Evolve in Organizations: Analysis of a Case Study https://papers.academic-conferences.org/index.php/icair/article/view/4270 <p style="font-weight: 400;">The present study investigates how Generative Artificial Intelligence (GenAI) use cases emerge and evolve within organisational settings, challenging dominant "blueprint" perspectives that assume static implementation pathways. While Generative AI offers transformative potential by generating novel content and workflows, research often overlooks the dynamic, socio-technical processes through which its value is realised in practice. Addressing this gap, we conduct a longitudinal case study analysing 18 months of GenAI implementation within the pre-sales processes of a manufacturing firm. Our analysis reveals that GenAI use cases evolve dynamically across two key dimensions: process formalisation (structured vs. emergent) and user type (internal vs. external). Findings indicate that use cases emerge organically through user interaction, organizational learning, and strategic imagination, progressing through four distinct phases.
This evolution is enabled by specific conditions, including organisational legitimacy, technological flexibility, reflexivity, experimentation, and strategic foresight. The study develops a dynamic framework mapping this use case emergence, identifying key patterns and enabling conditions. Theoretically, this research contributes by shifting focus from static GenAI adoption models to dynamic, path-dependent processes, extending affordance theory through "evolutionary affordances," and deepening understanding of value co-creation and co-destruction dynamics over time. Practically, the framework assists organizations in managing GenAI adoption as an emergent process, guiding capability development and investment strategies for expanding GenAI applications across diverse organizational contexts.</p> Fabrizio Amarilli Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4270 Thu, 04 Dec 2025 00:00:00 +0000 Generative AI Competencies – Framework and Maturity Model for Users in Their Work Settings https://papers.academic-conferences.org/index.php/icair/article/view/4371 <p>The increasing use of generative artificial intelligence (GenAI) is changing work processes in companies and requires new competencies from employees. While existing competency models primarily focus on general AI, they do not sufficiently account for the unique features of generative systems. The aim of this article is therefore to develop a specific AI competency framework and maturity model for the successful and reflective use of GenAI in a corporate context. Based on a design science approach, relevant skills were tested for their transferability, supplemented with GenAI-specific competencies, and operationalized along defined maturity levels. A consultation with experts was conducted to evaluate the model. 
The result encompasses three competency areas – digital/technological, social, and cognitive competencies – with a total of 18 individual competencies, mapped to three maturity levels of GenAI use in companies. The model supports researchers and practitioners alike in systematically assessing competency levels within companies, identifying potential areas for improvement, and developing targeted strategies for competency development.</p> Sascha Armutat, Malte Wattenberg, Nina Mauritz, Swetlana Franken Copyright (c) 2025 International Conference on AI Research https://creativecommons.org/licenses/by-nc-nd/4.0 https://papers.academic-conferences.org/index.php/icair/article/view/4371 Thu, 04 Dec 2025 00:00:00 +0000 Seeds of Deception: Securing AI-Driven Agriculture Against Adversarial Threats https://papers.academic-conferences.org/index.php/icair/article/view/4386 <p>The integration of artificial intelligence (AI) and the Internet of Things (IoT) into agriculture is redefining how crops are cultivated, monitored, and protected. This research builds upon an implemented IoT-driven plant monitoring prototype combined with a convolutional neural network (CNN) for leaf disease classification. The system achieved 96% accuracy on benchmark datasets and 76% accuracy on live samples, demonstrating the technical promise of digital agriculture. However, while effective in functionality, the prototype highlights a broader concern: agricultural digitization is evolving faster than its security safeguards, creating fertile ground for adversarial exploitation. To address this gap, the study applies threat modeling to the implemented prototype, identifying vulnerabilities in sensor integrity, data pipelines, and AI model robustness. Potential adversarial vectors include sensor spoofing, data poisoning, and adversarial image inputs capable of undermining disease detection accuracy. 
These findings serve as a foundation for expanding the analysis toward two emerging risks that elevate agricultural cybersecurity into the domain of biowarfare.</p> <p>First, the increasing reliance on cloud-hosted genetically modified organism (GMO) repositories presents a novel threat. Adversarial prompt engineering attacks on agricultural AI assistants could leak or corrupt sensitive genetic data, embedding harmful traits within seeds. Such tampering collapses the boundary between digital compromise and biological sabotage, threatening food security at scale. Second, agricultural AI infrastructures are increasingly dependent on high-density data centers that consume large volumes of potable water for cooling. A targeted cyber-physical campaign that overloads these facilities could deliberately drain water reserves, induce man-made drought conditions, and destabilize surrounding ecosystems. This risk reframes data centers not only as computational assets but also as critical ecological choke points. By combining the practical threat modeling of an IoT–AI prototype with conceptual extensions into GMO and data center vulnerabilities, this work establishes a novel framework for agricultural cyber-biosecurity. It underscores the urgency of interdisciplinary safeguards to prevent the transformation of smart farming from a tool of resilience into a vector of biowarfare.</p> Ruchira Balkudru Bhat, Shreyas Kumar Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4386 Thu, 04 Dec 2025 00:00:00 +0000 Teaching Responsible AI Entrepreneurship: Experiences from the Erasmus+ Pathfinder Project https://papers.academic-conferences.org/index.php/icair/article/view/4376 <p>The rapid integration of artificial intelligence in society presents both profound opportunities and urgent challenges for higher education. 
As future professionals and entrepreneurs will increasingly rely on AI-driven tools, it is essential that universities cultivate AI literacy, critical thinking, and ethical reasoning. In response, this paper presents the design of a new course, AI in Business: Ethics, Applications, and Entrepreneurship, developed under the Erasmus+ Pathfinder project and launched in September 2025. The course is grounded in the UNESCO AI Competency Frameworks, Design Thinking, and the ENTRECOMP Framework, and supports the European Union’s Digital Education Action Plan by promoting responsible, human-centered AI integration.&nbsp;Structured around ten thematic modules, the course introduces students to AI ethics, human-centered design, technical applications, and innovation strategies. Delivered online, it leverages group projects, case studies, and real-world problem-solving. A distinctive feature is the use of large language models as learning partners, framing AI not merely as a tool but as a co-creative agent in cognitive and entrepreneurial development. Students work in teams to develop and pitch AI-based business ideas, supported by coaching and expert feedback.&nbsp;The course is informed by a constructivist and sociocultural pedagogical foundation, where knowledge is co-constructed through active engagement and mediated by cultural tools, in this case generative AI. It also draws on human-centered design thinking and transformative learning theory, encouraging students to reframe assumptions and design solutions with ethical and societal impact in mind.&nbsp;Though the course is set to end in December 2025, this paper outlines the instructional model, theoretical foundation, and implementation strategy.
It offers a scalable, interdisciplinary framework for embedding ethical, inclusive, and innovation-driven AI education within higher education, while empowering educators and students to engage critically and constructively with the evolving digital landscape.</p> Olga Bogdanova Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4376 Thu, 04 Dec 2025 00:00:00 +0000 Investigating Factors Influencing the Utilization of ChatGPT Among Students in Higher Accounting Education: Expanding the Technology Acceptance Model https://papers.academic-conferences.org/index.php/icair/article/view/4385 <p>The study investigated the factors influencing the behavioural intention to utilize ChatGPT among accounting students in higher education. Drawing on the technology acceptance model, the research explores how self-efficacy and perceived risk affect students' intention to adopt ChatGPT as a learning aid. The study adopted a simple random sampling technique and gathered data from 200 accounting students at Ghanaian universities. The data were analysed using the PLS-SEM technique. The study found that the most crucial factor influencing behavioural intention toward ChatGPT among accounting students is perceived usefulness. Attitude was also found to be a significant factor affecting the behavioural intention to adopt ChatGPT. Perceived usefulness and perceived ease of use were found to have positive effects on students' attitudes toward ChatGPT adoption. The study also found that whereas self-efficacy showed a positive effect on attitude, perceived risk showed a negative effect on attitude toward ChatGPT adoption.
The findings underscore the direct influence of self-efficacy and perceived risk in AI adoption and suggest that educational policies should prioritize training programs and promote accountability frameworks around AI tools.</p> Felix Buabeng-Andoh, Charles Buabeng-Andoh, Drahomira Pavelkova Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4385 Thu, 04 Dec 2025 00:00:00 +0000 Organizational and Social Impact of AI-Driven Fake Review Detection in E-commerce https://papers.academic-conferences.org/index.php/icair/article/view/4207 <p style="font-weight: 400;">The rapid growth of e-commerce and mobile transactions has fueled an increase in fraudulent activities, with fake online reviews posing a major threat to businesses and consumers. Such reviews, often generated by sellers, third parties, or bots, distort reputations, mislead consumer decisions, and erode trust. To address this, organizations are adopting AI-driven detection systems that offer scalability and efficiency while also raising challenges around bias, transparency, and ethical oversight. Drawing on insights from a doctoral dissertation focused on real-time detection of fake reviews using AI, this article examines how these systems shape platform governance, seller behavior, and consumer perceptions of fairness and trust. It highlights implications for risk management, compliance, and regulation, while also assessing social consequences such as false positives, opacity, and perceived bias. The paper recommends strengthening platform governance, ensuring fair treatment of sellers, and enhancing internal risk management at the organizational level, while at the social level, emphasizing consumer trust-building, fairness and legitimacy, and embedding ethical safeguards to protect privacy, accountability, and literacy.
Finally, this paper calls for a responsible, human-centric approach to AI deployment that balances automation needs with ethical oversight, enhances consumer confidence, improves platform integrity, and promotes organizational efficiency.</p> Naren Chandra Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4207 Thu, 04 Dec 2025 00:00:00 +0000 AI-based Chatbots in Customer Communication: A Comparative Study of Germany and China https://papers.academic-conferences.org/index.php/icair/article/view/4166 <p>This study analyzes the implementation, usage, and acceptance of AI-supported chatbots in customer communication in Germany and China. Based on a systematic literature review and seven expert interviews with representatives of international technology companies, the research investigates both technological and cultural contextual factors influencing chatbot adoption. The study highlights how varying regulatory frameworks, infrastructure readiness, and cultural attitudes shape the deployment and effectiveness of AI-driven communication tools in different markets. Findings indicate that in Germany, stringent data protection laws, regulatory complexity, and cultural hesitations around privacy and automation present major obstacles to chatbot integration. In contrast, China’s innovation-friendly regulatory environment, extensive government support, and high technology affinity foster rapid deployment and wide user acceptance of AI-based solutions. Moreover, differences in organizational priorities emerge, with Chinese companies emphasizing speed, platform integration, and functionality, while German firms focus on data security, compliance, and personalized, trust-building customer interactions. The study further explores variations in user experience and communication design, underscoring the importance of culturally adapted interfaces and context-sensitive implementation strategies. 
Based on these insights, the paper offers strategic recommendations for businesses to successfully implement AI chatbots in diverse regulatory and cultural landscapes. Additionally, it outlines directions for future research, particularly regarding the development of agentic AI, multimodal interaction capabilities, and sustainable deployment models that consider ethical, infrastructural, and environmental aspects.</p> Anja Corduan-Claussen, Alexandra Grubert Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4166 Thu, 04 Dec 2025 00:00:00 +0000 From Disruption to Innovation: Integrating Active Learning in AI-Resilient Assessment Design https://papers.academic-conferences.org/index.php/icair/article/view/4274 <div><span lang="EN-US">Artificial Intelligence (AI) and generative learning technologies are transforming the landscape of higher education. With tools capable of producing essays/reports, solving complex problems, and simulating critical thought, traditional assessment practices are becoming increasingly vulnerable. The rapid, widespread, and easy accessibility of generative AI raises concerns about academic dishonesty, plagiarism, and the erosion of original thought. This disruption calls for a reimagining of assessment models that are not only robust in the face of AI but also pedagogically sound. Active Learning Strategies (ALS) offer a pathway forward. Rooted in constructivist and experiential learning theories, ALS emphasize student participation, collaboration, and real-world application. By shifting from passive learning methods to active engagement, these strategies promote higher-order thinking and personal investment in learning, qualities that AI cannot easily replicate.
This paper aims to analyze how ALS can underpin AI-resilient assessment design, drawing insights from a scoping literature review, an applied case study from the UNESCO-ESCS Chair in Portugal, and the results of inquiries conducted with students.</span></div> Natacha Jesus-Silva, Maria Dos-Santos, Maria Duarte Bello Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4274 Thu, 04 Dec 2025 00:00:00 +0000 IT Governance, Audit and Risks Management in Banks: A Narrative Literature Review and Future Research Agenda https://papers.academic-conferences.org/index.php/icair/article/view/4352 <p>The increasing digitalization of the banking sector has significantly reshaped institutional control mechanisms and governance structures, particularly in the realm of Information Technology (IT). This narrative literature review examines the evolving role of IT audit and governance in banks, with a specific focus on how these mechanisms contribute to risk management and regulatory compliance. Drawing upon a corpus of 26 peer-reviewed studies spanning from the early 2000s to 2025, this paper offers an integrated framework to understand the complex interrelations between IT governance, internal auditing, emerging technologies, and institutional oversight. Our literature review situates IT audit practices within broader organizational, cultural, and regulatory contexts. It explores how frameworks such as COBIT and COSO serve not only as technical guides but also as institutional artifacts that shape organizational behavior, strategic decision-making, and normative compliance. The review reveals that the role of IT audit has expanded from a purely technical function to a strategic enabler of trust, transparency, and accountability within financial institutions.
Furthermore, audit committees and specialized board-level IT committees are shown to play a critical role in translating technological risks into governance priorities, thereby fostering a culture of proactive risk mitigation. Our analysis addresses the competencies of IT auditors, emphasizing the increasing demand for specialized skills in cybersecurity, data governance, and AI-integrated systems. The findings suggest that organizations with robust IT governance structures and trained audit personnel are better equipped to address technological disruptions and regulatory pressures. Moreover, the integration of Artificial Intelligence (AI) into audit processes is identified as both a transformative opportunity and a governance challenge. This paper contributes to the literature by providing a comprehensive picture that connects technical auditing practices with broader sociotechnical systems. It identifies critical gaps in current audit practices, highlights the importance of organizational culture and ethics in IT governance, and proposes avenues for future research, particularly on the intersection of AI, audit methodologies, and institutional compliance.</p> Paola Demartini, Flavia Cocuccioni Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4352 Thu, 04 Dec 2025 00:00:00 +0000 The Legal Framework of Artificial Intelligence in the European Union: Regulation, Liability, and Sectoral Challenges. https://papers.academic-conferences.org/index.php/icair/article/view/4331 <div><span lang="EN-US">This paper provides a systematic analysis of the emerging legal profiles in the artificial intelligence ecosystem, structured along three interdependent conceptual axes. Firstly, it examines the multi-level regulatory framework taking shape in the European Union. The EU Regulation 2024/1689 is critically explored in its risk-based approach, with particular attention to the categories of high-risk AI systems.
The synergies and tensions with the legal framework governing data circulation in the Union are analyzed, a framework that profoundly influences compliance obligations for data-driven technologies. This includes the EU Regulation 2016/679, which directly addresses crucial issues such as automated decisions and profiling; the EU Regulation 2022/868 on data altruism mechanisms and the reuse of public data; the EU Regulation 2023/2854, which introduces rules related to accessing data generated, among others, by IoT and industrial devices; as well as the EU Regulation 2025/327. Secondly, the complex issue of AI liability is evaluated, in the dual dimension of and safety. Particular attention is paid to the now-withdrawn EU regulatory proposals. A specific focus addresses the possible applicability of the product liability regime. Finally, several sectoral criticalities are identified through case studies in the healthcare and transport domains, evaluating for each the difficult balance between potential benefits and the risks of algorithmic biases and systemic discrimination.
The methodology combines dogmatic analysis, legal comparison, and concrete case studies, contributing to the debate on harmonization between technological innovation, protection of fundamental rights and sustainable legal governance models in the AI era.</span></div> Valentina Di Gregorio, Matteo Turci, Monica Gigola Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4331 Thu, 04 Dec 2025 00:00:00 +0000 Bridging the AI Governance Gap: Ethical and Regulatory Imperatives for Generative AI in Nigeria https://papers.academic-conferences.org/index.php/icair/article/view/4129 <p>As generative artificial intelligence (AI) technologies—such as <em>ChatGPT</em>, <em>DALL·E</em>, and other large language and image models—become increasingly mainstream, they introduce new ethical, legal, and governance challenges that are particularly urgent in developing countries. Nigeria, Africa’s most populous nation and a regional technology hub, offers a compelling case study of how these technologies are being adopted in environments with minimal regulatory infrastructure and limited public awareness. This paper examines the ethical and societal implications of generative AI in Nigeria and interrogates the country's preparedness to manage these risks. Despite the creation of the National Centre for Artificial Intelligence and Robotics (NCAIR) in 2020 and the recent passage of legislation such as the Nigeria Data Protection Act (2023) and the Startup Act (2022), Nigeria lacks a unified national AI formal risk classification systems, or sector-specific ethical guidelines. These gaps are important given the widespread, unregulated use of generative AI tools in education, politics, and digital commerce. In higher education, students increasingly rely on generative AI for assignments and projects, raising concerns about academic integrity in a system already strained by infrastructural deficits. 
Meanwhile, in the political domain, deepfake videos and AI-generated misinformation have circulated during election periods, threatening democratic stability in a media environment prone to disinformation and weak content regulation. The paper compares Nigeria’s regulatory trajectory with global trends, particularly the European Union’s Artificial Intelligence Act and similar initiatives in Kenya, South Africa, and Rwanda. It highlights how Nigeria’s reactive approach to AI governance contrasts sharply with more proactive global models. Sectoral analysis reveals risks including digital labour displacement, cultural misrepresentation through foreign-trained models, algorithmic bias, and the erosion of public trust. Ultimately, the study calls attention to Nigeria’s urgent need for a comprehensive, context-sensitive AI ethics and governance framework. Through an analysis grounded in local realities and informed by global comparisons, the paper contributes to broader conversations about equitable, responsible AI adoption in the Global South.</p> Oluwatayofunmi Durodola Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4129 Thu, 04 Dec 2025 00:00:00 +0000 Building Trust through Transparent AI Governance: Embedding Ethical Oversight into Academic Curricula Development https://papers.academic-conferences.org/index.php/icair/article/view/4316 <p>As artificial intelligence (AI) technologies become increasingly embedded in higher education, from adaptive learning platforms to algorithmic assessment tools, there is a growing imperative to examine the ethical and governance implications of such systems. While AI holds significant promise for improving efficiency and personalisation in education, its deployment also raises concerns around surveillance, algorithmic bias, data privacy, and equity. 
This article critically explores how transparent AI governance frameworks can be systematically integrated into academic curricula. It argues that ethical oversight and trust must be embedded not merely at the institutional policy level but within the design, delivery, and evaluation of educational programmes themselves. Drawing on interdisciplinary insights from digital governance, education policy, and data ethics, the paper develops a conceptual model for curriculum reform, articulates the ethical dimensions of AI in education, and presents illustrative case studies of institutions that have begun embedding such oversight into their teaching practices. By aligning curriculum design with emerging ethical principles, this study proposes that universities and higher education institutions can develop graduates who are not only technically proficient but ethically literate, capable of navigating and interrogating the socio-technical dimensions of AI in their future professions.</p> Kelechi Ekuma Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4316 Thu, 04 Dec 2025 00:00:00 +0000 Comparative Study of AI and Human Evaluation for Student Website Projects https://papers.academic-conferences.org/index.php/icair/article/view/4301 <p>Artificial intelligence tools based on large language models are increasingly being adopted across a wide range of fields, including higher education. Given the substantial workload often faced by educators, these tools offer promising potential to assist in the evaluation of student work. However, empirical research on their reliability—particularly in assessing practical, design-oriented assignments such as student-developed websites—remains limited. This study aimed to investigate the ability of various AI tools to evaluate student website projects and the consistency between the evaluations given by AI tools and human instructors using the same criteria. 
Based on a literature review, a set of evaluation criteria was developed across three categories: user interface (UI), user experience (UX), and code quality. Each student project included a website prototype and the corresponding implementation code. Nine student projects were evaluated independently by seven AI tools and human instructors (HI), using a Likert scale. To reduce variability, all AI tools were provided with the same evaluation prompt. The Wilcoxon signed-rank test revealed no statistically significant differences in many evaluation criteria between AI tools and HIs, suggesting general similarity in overall scoring. On the other hand, the Spearman correlation analysis revealed low consistency in how AI tools and HIs evaluated specific aspects of the projects. This indicates that while the evaluations provided by AI tools and HIs may appear similar at a surface level, their underlying judgment patterns—particularly regarding certain criteria of UI/UX design and code quality—can diverge. However, ChatGPT-4.5 and ChatGPT-4o delivered particularly promising outcomes. From an educational perspective, the study results highlight the importance of treating AI tools as supportive assistants rather than autonomous evaluators—at least for now—especially in domains involving subjective or context-sensitive judgment. 
Identifying where AI tools’ evaluations align or conflict with human judgment provides valuable insight into the appropriate use, potential, and limitations of such tools in academic evaluation.</p> Lidia Feklistova, Artur Kašnikov Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4301 Thu, 04 Dec 2025 00:00:00 +0000 Organisational AI Culture: A Model at the Nexus of Human Resources, Management, and AI Technology https://papers.academic-conferences.org/index.php/icair/article/view/4264 <div><span lang="EN-GB">Inarguably, artificial intelligence (AI) is redefining business models and strategies, evolving organisational structures, systems, processes, and human resource management (HRM). What is less clear is how AI is impacting organisational culture and the short-, mid-, and long-term implications of this transition to a more advanced digital state of operations. Furthermore, understanding what role culture plays in influencing employee populations to harness the potential of algorithm-based tools and resources remains an under-investigated research area in business management. The research in this study explores evolving institutional dynamics between organisational culture, HRM, and broader employee populations, coexisting to achieve business objectives in the age of AI. This study takes a quantitative approach, surveying 431 business managers’ perceptions of organisational culture and intention to adopt AI technologies in the workplace. A series of hypotheses is investigated, and the results contribute to the development of a conceptual model. We propose a model which centralises AI culture as a point of convergence of employees, resource management, and AI technologies to optimise strategic technological investments. 
Our research suggests that HRM plays a pivotal role in seamlessly integrating AI technologies and employees within organisations to develop an AI culture. This paper extends understanding and knowledge of the evolving dynamics between AI and organisational culture within commercial organisations.</span></div> Piper Frangos, Carina Paine Schofield Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4264 Thu, 04 Dec 2025 00:00:00 +0000 Human-centered AI in Healthcare – Balancing Patient Autonomy and Physician Judgment https://papers.academic-conferences.org/index.php/icair/article/view/4309 <p>This article outlines ethical issues related to integrating artificial intelligence (AI) into shared decision making (SDM), focusing on how to meet: (1) the need for explainability in enacting autonomy, (2) the need for respecting patients’ values and preferences in treatment decisions, and (3) the impact of AI on physician expertise. First, it is argued that the kind of explainability required to support patient and physician autonomy can be met through rigorous model validation combined with context-sensitive post hoc explanations. Next, turning to a patient perspective, the article argues against the assumption that having AI pre-rank treatment recommendations undermines patient autonomy and therefore ought to be avoided. Instead, the article recognizes AI’s potential to reduce cognitive overload and emphasizes balancing AI-guided decision-making properly. Subsequently, the physician’s perspective is considered, analyzing how AI impacts physician expertise, particularly in light of automation bias, deskilling, and the erosion of practice-based judgment. The article warns against a shift toward actuarial decision-making driven by algorithmic risk stratification, which may compromise core ethical principles. 
The article concludes by promoting human-centered AI integration to enhance human agency—empowering patients to make informed choices and allowing physicians to exercise sound clinical judgment.</p> Anne Gerdes Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4309 Thu, 04 Dec 2025 00:00:00 +0000 The Impact of the Fusion of Human and Artificial Intelligence on Changes in the Generation and Protection of Innovations https://papers.academic-conferences.org/index.php/icair/article/view/4282 <p>Artificial intelligence (AI), as a unique multi-innovation, is the result of the action of human intelligence in an anthropocentric innovation model. The intensive development of innovative and autonomous capabilities of AI is a challenge for the transformation of the anthropocentric innovation model. The main aim of the research is to highlight the opportunities and challenges in the field of generating and protecting innovations, which will arise as a result of the fusion of human intelligence and AI. By using a proactive approach, the authors offer insights into the fusion of human intelligence and AI, as a brand-new innovation resource that will affect: the emergence of a new hybrid innovation model, changes in the structure of innovations, the emergence of multi-innovations, the redefinition of the concept of innovation portfolio, the conception and implementation of new intellectual property (IP) policies that will permit the establishment of a balance between human autonomy and the absence of discrimination against AI, as well as adequate legal protection for innovations generated by the fusion of human intelligence and AI. 
The research results show that the fusion of human intelligence and AI simultaneously represents a stimulus for redefining and improving the innovative capabilities of human intelligence and for the continuous development of the innovative and autonomous capabilities of AI, as well as a catalyst for positive changes in the innovation model, which still remains focused on humans. In the near future, the fusion of human intelligence and AI will become a framework for research with scientific, economic, legal and political significance, and a radical systemic impact on society as a whole.</p> Ana Lucija Gojakovic, Dejan Jeremic Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4282 Thu, 04 Dec 2025 00:00:00 +0000 Exploring the Dual Nature of AI in Education: Emphasising the Importance of Proactive Strategies to Ensure its Ethical Use https://papers.academic-conferences.org/index.php/icair/article/view/4265 <p>The increasing availability of advanced analytics tools such as Artificial Intelligence (AI) and Machine Learning (ML) is driving almost every industry to change its practices. Among these, chatbot technology (ChatGPT), a computer program that simulates human conversation with an end user, has witnessed vast advancements. Not long ago, the impact of ChatGPT and similar AI language models in the education sector was predicted to be transformative, reshaping the way students, teachers, and institutions interact with educational content (Agarwal et al., 2024). Arguably, these tools are already playing an increasingly significant role in the education sector, with a wide range of applications that are transforming teaching, learning, assessment and students’ experiences. 
While ChatGPT and similar AI tools offer significant benefits in Higher Education (HE), including personalised learning, enhanced student engagement, improved collaboration, and greater accessibility, they also raise critical concerns about academic integrity and dishonesty. Central to these concerns is the potential for plagiarism, as students may rely on AI/ChatGPT-generated content in ways that undermine original thought and academic honesty. This is a current dilemma for many HE providers in the UK and beyond, including the University of Hertfordshire (UH), which is the case examined in this paper. Therefore, the aim of this ongoing project is to explore the dual nature of AI/ChatGPT tools in education, emphasising the importance of proactive strategies to ensure their ethical use. This article provides a reflective analysis of the initial strategies implemented to promote the ethical use of AI/ChatGPT tools within one of the largest postgraduate taught courses, with 1000+ students, at the UHBS. In this course, students are asked to develop research proposals for their final postgraduate projects. The AI implementation strategies were designed primarily to reduce plagiarism by encouraging responsible engagement with AI/ChatGPT tools. 
Based on the course data analyses, the article offers comprehensive, practical recommendations for educators to foster fairness, uphold academic standards, and guide students in the ethical and responsible use of AI technologies in academic settings.</p> Ketty Grishikashvili Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4265 Thu, 04 Dec 2025 00:00:00 +0000 Deepfake Detection: Human Performance Versus AI Tools – A Comparison of Accuracy and Effectiveness https://papers.academic-conferences.org/index.php/icair/article/view/4070 <p>Image manipulation is a phenomenon much older than digital image handling and generative artificial intelligence (GenAI). In the digital era, researchers have made a distinction between cheapfakes and deepfakes. The creation of cheapfakes requires a relatively low technical editing level and does not depend on any GenAI technology. This study had a focus on deepfakes only and explored the role of GenAI in the generation of deepfakes as well as the role of AI in tools for detecting deepfakes. The term deepfakes refers to high-quality and synthetic media content created with the use of deep learning and generative artificial intelligence. Recent advances in deepfake generation have made such content even more realistic, making it harder to identify. The rise of AI-convincing content is a growing social issue that poses serious challenges for its detection. Although the importance of deepfake detection is widely recognized, research comparing the performance of humans and AI on deepfakes analysis is still in its early stages. This study addresses this gap by conducting a comparison between human analysis and AI-based detection tools to evaluate their accuracy and effectiveness in identifying deepfakes. The testing was based on a set of AI-generated images related to the Israel–Hamas war, which were circulated on social media during 2023-2024. 
Future research should focus on testing a broader range of AI-based detectors to evaluate their effectiveness against different types of deepfakes, such as video, music, audio, and text, as well as less sophisticated AI-generated manipulations such as cheapfakes.</p> Anastasiia Iufereva, Peter Mozelius Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4070 Thu, 04 Dec 2025 00:00:00 +0000 Reimagining Professional Development in the Age of Artificial Intelligence https://papers.academic-conferences.org/index.php/icair/article/view/4237 <p>As Artificial Intelligence (AI) reshapes education, professional development (PD) must go beyond tool training to foster critical, meaningful integration. Initial PD should introduce AI’s uses and challenges, but also address the impact on teaching and learning. This paper explores and reflects upon Phase II of the FAITH project, a transatlantic design-based initiative developing an AI and Education (AI&amp;ED) model for higher education. Effective AI pedagogy is grounded in socially constructed, hands-on experiences where educators design lessons, generate content, and critically assess AI outputs. Such approaches build confidence and competence, and prevent mechanical adoption. Leadership and policy must further support a dual PD strategy: immediate classroom applications alongside preparation for broader societal shifts. Early FAITH findings show introductory courses spark essential dialogue, but PD must remain dynamic, ethical, and intentional. Phase II combines theoretical exploration (e.g., sustainability, ethics) with context-relevant practice. 
Ultimately, AI&amp;ED should be understood as a lifelong professional learning journey.</p> Jimmy Jaldemark, Martha Cleveland-Innes, Marcia Håkansson Lindqvist, Peter Mozelius Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4237 Thu, 04 Dec 2025 00:00:00 +0000 Governing Generative AI at Scale: Institutionalizing Alignment for Organizational Purpose https://papers.academic-conferences.org/index.php/icair/article/view/4374 <p>The rapid integration of generative AI (GenAI) into core organizational infrastructure requires a rethinking of<br />traditional governance models. This paper explores how organizations can effectively govern GenAI at scale while<br />maintaining strategic, ethical, and societal alignment. By synthesizing key theoretical frameworks, including institutional<br />theory and dynamic capabilities, we propose a conceptual framework organized around three interconnected domains:<br />Model Stewardship, Operational Alignment, and Strategic Guardrails. We contend that scalable governance must evolve<br />from static compliance to a dynamic, adaptable organizational capability. The paper concludes with the introduction of the<br />GenAI Governance Maturity Model (GAI-GMM), a research-based tool for institutional alignment.<br /><br /></p> Mitt Nowshade Kabir Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4374 Thu, 04 Dec 2025 00:00:00 +0000 LangGraph-Orchestrated LLM Agents for Scalable Movie Knowledge Graphs and Question Answering https://papers.academic-conferences.org/index.php/icair/article/view/4142 <p>Recent advances in large language models (LLMs) and agent-based orchestration are transforming automated knowledge graph (KG) creation as well as robust question answering in complex domains. 
We present a modular, multi-agent system that extracts, integrates, and reasons over diverse NoSQL movie data, powered by state-of-the-art LLMs such as GPT-4.1. Our architecture converts unstructured plots, cast/crew metadata, and numeric attributes into high-fidelity KGs - enabling both natural language and programmatic queries. To maximize reliability and flexibility, the system unifies multiple retrieval strategies - keyword search, vector similarity, knowledge graph querying, and summarization - each deployed as an autonomous pipeline. Parallel orchestration via LangGraph supports adaptive engine selection, concurrent execution, and robust answer verification with LLM ensemble “jury” scoring. Critically, the framework features comprehensive observability, allowing detailed monitoring and analysis of agent decisions, pipeline performance, and query outcomes. By treating each retrieval method and LLM as a specialized agent, our approach delivers scalable, explainable, and highly accurate results (up to 97%), significantly surpassing monolithic solutions. This agentic, observable architecture paves the way for next-generation autonomous analytics, integration, and decision support across data-rich domains.</p> Alex Kaplunovich Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4142 Thu, 04 Dec 2025 00:00:00 +0000 Preliminary Study of TexAI: Where Adaptive AI Reimagines Law Enforcement Training https://papers.academic-conferences.org/index.php/icair/article/view/4361 <p>Law enforcement agencies today operate at the frontline of data-sensitive decision-making, yet their training<br />systems remain alarmingly analog. This gap has far-reaching consequences: The Police Department unintentionally deleted<br />over eight terabytes of digital evidence, affecting nearly 17,000 criminal cases and causing significant public backlash and<br />judicial delays (NBC 5 Dallas-Fort Worth, 2019). 
The root of this crisis lies not in technology alone, but in an outdated training<br />paradigm that fails to prepare officers for the ethical, operational, and procedural demands of an AI-driven society. This<br />paper explores how adaptive, explainable AI (XAI) can reframe the relationship between law enforcement and digital<br />governance. We present TEXAI (XAI-powered Knowledge Base for Texas Law Enforcement), an AI-powered prototype built<br />to modernize cybersecurity training in policing. Developed through user interviews and field research, the app combines<br />real-time regulation updates with personalized, scenario-based microlearning, targeting a key challenge: officers forgetting<br />or misunderstanding complex, evolving legal protocols. Our research examines how integrating XAI principles into law<br />enforcement workflows introduces not only technological efficiency but critical epistemological transparency, fostering<br />institutional accountability. We situate this intervention in the broader context of AI's role in public-sector transformation,<br />arguing that ethical deployment of adaptive systems is essential to restoring public trust and preventing catastrophic human<br />error. TEXAI also functions as a case study for how context-aware, role-specific AI tools can evolve through participatory<br />design, responding to both human vulnerability and structural inefficiency. We contrast our solution with existing national<br />systems such as PoliceOne Academy and Axon Academy, highlighting a novel intersection between AI explainability, justice<br />system integrity, and digital literacy. The implications extend beyond law enforcement: in demonstrating how adaptive AI<br />can personalize and democratize professional training in real time, we propose a scalable model for AI's responsible<br />integration into high-stakes, socially critical domains. 
This work contributes to growing discourse around ethical AI, resilience<br />in digital infrastructure, and the future of labor in AI-mediated institutions.<br /><br /></p> Shreyas Kumar, Charvi Vohra, Manas Rai, Arya Singh Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4361 Thu, 04 Dec 2025 00:00:00 +0000 AI and Automation: Effects on Employment and Management https://papers.academic-conferences.org/index.php/icair/article/view/4387 <p>The rapid rise of Artificial Intelligence (AI) and automation is transforming industrial operations, reshaping job<br>roles, and redefining management strategies, ultimately reshaping the structure of employment and redefining managerial<br>practices across industries. This paper examines the nuanced impact of AI-driven automation on labor markets, workforce<br>dynamics, and organizational management, drawing on interdisciplinary research from economics, computer science, and<br>business studies. We examine three key dimensions: (1) the displacement and transformation of job roles due to intelligent<br>systems, (2) the evolution of managerial decision-making empowered by AI-based analytics and predictive modeling, and (3)<br>the emergence of hybrid human-AI collaboration paradigms within enterprises. Our analysis integrates case studies from<br>sectors undergoing rapid AI integration, such as manufacturing, healthcare, and logistics, highlighting both job obsolescence<br>and opportunities for upskilling and task augmentation. The paper also examines the ethical and strategic implications of<br>managing an AI-enabled workforce. These include algorithmic transparency, bias mitigation, labor reallocation policies, and<br>the design of AI governance frameworks. 
We identify managerial challenges in adapting to a dual human-machine<br>environment, including shifting leadership roles, redefining performance metrics, and maintaining employee trust amid<br>technological change. Using empirical labor market data and organizational surveys, we propose a typology of employment<br>impact, ranging from automation-intensive displacement to augmentation-driven productivity gains. We argue that the<br>future of work depends not only on technological capability but on proactive policy, inclusive design, and agile management<br>strategies. Our findings underscore the urgent need for interdisciplinary collaboration in crafting equitable AI transitions. We<br>conclude with recommendations for policymakers, business leaders, and educators to ensure that AI serves as a catalyst for<br>sustainable and inclusive growth, rather than as a force for division and dislocation.<br><br></p> Shreyas Kumar, Saptarishi Das, Apoorv Agrawal, Dipshikha Shaw Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4387 Thu, 04 Dec 2025 00:00:00 +0000 Cracking the Chip: AI-Powered Security for Semiconductor Threats https://papers.academic-conferences.org/index.php/icair/article/view/4287 <p>Semiconductor supply chains have become increasingly vulnerable to sophisticated, low-level threats that originate in the early phases and propagate undetected across various stages of semiconductor device production. As semiconductor systems grow more complex and globally interconnected, these low-level design threats present significant risks, including data breaches, system failures, and long-term erosion of reliability. This paper presents a comprehensive AI-driven framework to detect, model, and mitigate hardware security threats across the semiconductor supply chain, from design and fabrication to assembly. 
We begin with the design phase, illustrating how vulnerabilities like hardware Trojans in third-party IP blocks, compromised EDA scripts, and speculative execution side-channels can be exploited. AI techniques, such as anomaly detection for logic integrity, dynamic hashing for secure script flows, and entropy-based instruction shuffling, are shown to proactively block or obfuscate these attacks. These models serve as templates for subsequent stages, including fabrication (tampered masks or altered process flows), assembly and packaging (hardware fingerprinting), and post-silicon validation (malicious test routines or data exfiltration). Our contributions include a stage-wise breakdown of threat surfaces across the supply chain and the design of threat models with corresponding AI-driven defenses that analyze patterns, enforce trust boundaries, and obfuscate system behavior. Additionally, to assess the viability of these defenses, we outline a validation framework involving simulated and prototyped defenses, which include instruction shuffling, JTAG interface monitoring, and machine learning-based fault pattern analysis. Proposed evaluation metrics include detection accuracy, computational overhead, entropy of runtime traces, and classification accuracy. By addressing persistent security threats early and continuously throughout the chip lifecycle, we aim to leverage AI to shift hardware security from reactive patching to proactive risk management. 
Our work emphasizes the importance of securing semiconductor systems at their root, offers a path toward proactive hardware security, and highlights the need for scalable, interdisciplinary solutions at the intersection of AI, hardware design, and supply chain security.</p> Shreyas Kumar, Shruti Oruganti, Isha Virk Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4287 Thu, 04 Dec 2025 00:00:00 +0000 Teaching Bayesian Reasoning as a Pathway Toward Active Thinking and Explainable AI https://papers.academic-conferences.org/index.php/icair/article/view/4325 <p>Decision-making under uncertainty requires not only computational tools but also critical thinking skills that allow individuals to evaluate assumptions, weigh evidence, and mitigate automation bias. While many contemporary AI systems operate as opaque black-box models, Bayesian Networks (BNs) provide a transparent and explainable alternative, making them well-suited for both decision support and AI education. This paper introduces an educational framework where learners construct, parameterize, and interpret Bayesian models to address authentic problems, such as classifying suspicious emails in cybersecurity. By explicitly modelling variables, dependencies, and prior assumptions, BNs engage students in probabilistic reasoning while promoting metacognitive reflection and critical evaluation of their decision-making process. The contribution of this work is threefold: (1) it positions Bayesian Networks as both a mathematical reasoning tool and an accessible entry point into explainable AI; (2) it integrates probability theory, critical thinking, and transparency into a unified framework for Responsible AI education; and (3) it demonstrates how transparent reasoning can support human-in-the-loop decision-making and reduce automation bias. 
While the framework does not claim to solve the general challenges of explainability in complex AI models, it offers a concrete and transferable pathway for cultivating active thinkers capable of designing, interpreting, and questioning AI-assisted decisions.</p> Dimitrios Lappas, Panagiotis Karampelas, Giorgos Fesakis Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4325 Thu, 04 Dec 2025 00:00:00 +0000 Undergraduate Business Curricula and AI in the Workplace https://papers.academic-conferences.org/index.php/icair/article/view/4383 <p>As AI adoption accelerates in organizations and business operations, it is important that higher education continue to study its impact on teaching and learning. Following the formal introduction of two AI applications at a 23-campus university system, this work-in-progress research aimed to understand business faculty pedagogic response to AI and employer needs at one institution. Given that curricula design involves multifactorial thinking, this report is part of a larger study involving a sequenced set of data analyses. On the higher education side, for this study, current syllabi of courses in the General Management concentration in the undergraduate business program were analyzed. General Management was the most popular business concentration last year. As the technology continues to evolve, any study provides only a snapshot in time. Given higher education’s role in the economic sector, at this transformative juncture, modernizing academic understanding of the workplace needs to examine pedagogical practice relating to graduate preparation for new opportunities and challenges. 
</p> Sweety Law Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4383 Thu, 04 Dec 2025 00:00:00 +0000 Rethinking Holistic AI Development Through Social Diversity, Interdisciplinary Collaboration and Integrative Knowledge Production https://papers.academic-conferences.org/index.php/icair/article/view/4173 <p style="font-weight: 400;">The rapid deployment of AI reveals persistent socio-technical and data-driven biases that reflect profound epistemic limitations in knowledge production. These biases are not accidental, but symptomatic of deeper epistemic limitations in the way AI knowledge is produced — often by homogeneous teams within technocentric paradigms that exclude alternative perspectives. This paper argues that the underrepresentation of diverse social actors in AI development not only perpetuates inequality, but also severely limits the epistemic and ethical robustness of AI systems. The focus of this paper arises in particular from the preliminary findings obtained in the Horizon Europe project STEP, which highlight the potential of the framework to improve the inclusivity and trustworthiness of AI. The central thesis is that social diversity must be considered as an epistemic condition and not just an ethical or demographic ideal. Drawing on sociology, psychology and educational science, the authors show how integrating plural forms of knowledge, lived experiences and cultural perspectives into the design and development process can lead to AI systems that are more context-sensitive, equitable and trustworthy. Rather than proposing inclusion as an external corrective, this paper discusses a paradigm shift in AI development - a paradigm shift that embeds diversity into the infrastructure of knowledge production itself. The contribution of this paper is twofold. 
First, it proposes a theoretical model of integrative knowledge production that identifies mechanisms through which interdisciplinary collaboration can challenge dominant epistemologies and promote systemic reflexivity. Second, a participatory design framework is outlined to operationalise this model through concrete methodological tools, including dialogic co-design workshops, ethnographic participation in data selection and cross-functional team structuring. These practices aim to break through technocratic compartmentalisation by creating space for social critique and situated intelligence within AI development cycles. Finally, the authors reflect on the transformative potential of this approach and suggest that rethinking who is involved in AI knowledge production will not only change the outcomes of AI systems, but also the normative foundations of the technological future. From this perspective, ethical AI is not just explainable or compliant — it is structurally inclusive, responsive to different lifeworlds and open to critical reinvention.</p> Cinzia Leone, Angela Celeste Taramasso, Anna Siri Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4173 Thu, 04 Dec 2025 00:00:00 +0000 A Gamified Phishing Simulator Using Reinforcement Learning https://papers.academic-conferences.org/index.php/icair/article/view/4339 <p>An organisation's security is fundamentally reliant on its people. Regardless of the sophistication of its cybersecurity infrastructure, the absence of comprehensive training and awareness can lead to vulnerabilities. Traditional phishing awareness training typically involves sending simulated phishing emails to employees, allowing organisations to monitor actions such as link clicks, email reporting, and responses. While this method offers valuable insights into employee behaviour, it often struggles to engage users effectively. 
This conventional approach may not create a dynamic learning environment conducive to better retention of vital security practices. Furthermore, users generally do not receive immediate feedback regarding their interactions with phishing links, leaving organisations more susceptible to social engineering attacks. This research seeks to address the issue by developing an interactive gamified phishing simulator that employs reinforcement learning (RL). The methodology for this study consists of two key components. First, a literature review was conducted to assess existing phishing awareness techniques and explore how RL can be applied effectively. This review examined the integration of RL within cybersecurity education and explored the impact of gamification on user behaviour. For the RL agent, a dataset comprising both phishing and legitimate emails was compiled. The agent was then trained to discern phishing emails from legitimate ones based on various email features. Then the agent presents users with email challenges and delivers real-time feedback on their selections. The simulator incorporates a reward and badge system that promotes active participation and ongoing learning. This approach aims to overcome the limitations traditionally associated with static phishing training by fostering continuous learning, ultimately reducing user susceptibility. The effectiveness of the proposed simulator was evaluated based on its classification accuracy of phishing and legitimate emails.</p> Keana Leong, Noluntu Mpekoa Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4339 Thu, 04 Dec 2025 00:00:00 +0000 Improving Online Learning via VARK Learning Styles and Machine Learning-Driven Personalisation https://papers.academic-conferences.org/index.php/icair/article/view/4357 <p>Understanding student engagement and academic performance is crucial in online learning environments. 
However, many learning management systems (LMS) lack mechanisms to adapt to diverse learning styles and support meaningful collaboration. This study addresses these challenges by proposing a Personalised and Collaborative Learning Experience (PCLE) framework that integrates the Visual, Auditory, Reading/Writing, and Kinaesthetic (VARK) learning style model with collaborative filtering techniques. Unlike existing approaches that rely only on rating data, PCLE incorporates personalised learning styles into the recommendation process to create learner-centred outcomes. To overcome the lack of publicly available datasets containing personalised learning style data, a self-collected dataset was developed to reflect authentic learner preferences. Benchmark datasets from Coursera and Udemy were also used to validate baseline collaborative filtering performance. Three machine learning models—K-Nearest Neighbours (KNN), Singular Value Decomposition (SVD), and Neural Collaborative Filtering (NCF)—were applied and evaluated using Mean Absolute Error (MAE), Hit Rate (HR), and Average Reciprocal Hit Ranking (ARHR). Results from the benchmark datasets confirmed earlier findings that KNN performs well on structured review data, while the self-collected dataset demonstrated the added value of integrating learning styles. The self-collected dataset was evaluated separately, incorporating personalised learning styles into the recommendation process. This dataset represents the main contribution of the study, as VARK preferences were embedded alongside course interactions to extend recommendations beyond standard rating-based methods. This study highlights how personalisation and collaboration can be integrated into one framework to enhance learner engagement. While the same models were applied, embedding learning preferences into the self-collected dataset represents a methodological enhancement rather than a direct comparison with existing datasets. 
Findings highlight the role of dataset characteristics in shaping both accuracy and ranking quality and show how the PCLE framework balances personalisation with collaboration to support learner-centred outcomes. Future research should expand dataset diversity, include additional learner attributes, and explore advanced recommendation models to further optimise adaptability and performance.</p> Sook Ling Lew, Claireta Weiling Tang, Shih Yin Ooi Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4357 Thu, 04 Dec 2025 00:00:00 +0000 AI-Enhanced Stakeholder Engagement for Climate Adaptation: Evidence from Lithuania https://papers.academic-conferences.org/index.php/icair/article/view/4252 <p>Civil engineering faces the dual challenge of decarbonisation and resilience under increasing threat of climate change. While artificial intelligence (AI), machine learning, and digital twins are increasingly applied to optimise design, material reuse, and hazard modelling, most systems remain techno-centric and overlook the human dimensions of adaptation. This article addresses this gap by combining a nationally representative survey of Lithuanian residents (n = 1,013, 2023) with the design of an AI-enabled platform for civil engineering adaptation. The survey captured six domains (hazard experiences, adaptation behaviours, motivational drivers, preparedness levels, institutional linkages, and climate attitudes) providing a behavioural evidence base that reveals how climate concerns and motivations translate into action. The results highlight differentiated motivational pathways, moderate levels of preparedness, uneven institutional communication, and four distinct citizen profiles with specific adaptation probabilities. 
Building on these insights, the article proposes the Citizen-informed AI for Climate Adaptation (CiA-CA) framework, which systematically maps citizen evidence onto AI system design variables. The framework informs the development of the Lithuanian Construction Materials Reuse Optimization (LSEPO) platform, created under the Civil Engineering Research Centre (CIMC), by integrating hazard-prioritised digital twins, recommender systems with motivational weighting, clustering for personalisation, and preparedness-aware interfaces. Conceptually, CiA-CA advances the integration of behavioural adaptation evidence with socio-technical AI design. Empirically, it provides one of the first nationally representative datasets on climate adaptation behaviours in the Baltic region. Practically, it offers a blueprint for municipalities and industry partners in Lithuania to embed citizen evidence into AI-enabled platforms, with potential transferability to similar European contexts.</p> Monika Maciuliene Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4252 Thu, 04 Dec 2025 00:00:00 +0000 Explaining Consumer Preferences for AI- and Human-Authored Books: An Explanatory Experimental Study https://papers.academic-conferences.org/index.php/icair/article/view/4359 <p>Books—both fiction and nonfiction—are among the first consumer goods that artificial intelligence (AI) can fully generate at a quality indistinguishable from that created by humans. While prior research in domains such as art and music documents a preference bias against AI-created works, the transferability of these findings to typical consumption contexts—and whether such biases persist amid rapid AI diffusion—remains unclear. This study examines the effect of authorship disclosure on consumer preferences for AI- versus human-authored books, while also exploring psychological and economic mechanisms that may explain these differences. 
It further investigates heterogeneity of this effect by factors such as familiarity with AI to anticipate future states of AI adoption. For empirical evaluation of the effects, we conducted an online experiment that systematically manipulated authorship of books from multiple literary genres, using a representative sample of approximately 1,500 U.S. adults. Results showed a consistent preference bias against AI-generated books across genres. Among the potential explanatory mechanisms, perceived author effort, emotional attachment, and perceived proximity to the author emerged as the most influential. Heterogeneity analysis among early adopters and AI welcomers revealed that these effects are weakened but remain persistent. The findings indicate that consumers’ reluctance towards AI-created books arises not only from assessments of product utility but also from psychological factors. Although technological advances may alleviate consumer quality concerns, persistent social and emotional attachments are likely to result in separate market segments for AI- versus human-created works.</p> Vladimir Manewitsch, Alexa Kalb Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4359 Thu, 04 Dec 2025 00:00:00 +0000 Mapping the Research Landscape of Transparent AI in University Assessment: A Bibliometric Investigation https://papers.academic-conferences.org/index.php/icair/article/view/4117 <p>This study presents a systematic bibliometric investigation of AI transparency research in university assessment contexts. Following GLOBAL recommendations (Ng et al., 2024), we examined 72 peer-reviewed publications from Scopus (2019-2025) using performance metrics and science mapping techniques. Findings reveal exponential growth from single publications in 2019 to 26 documents in 2024 (R² = 0.7666, p = 0.0098). The domain generated 655 citations, achieving an h-index of 13. 
China leads in output (8 documents) while Sweden demonstrates highest citation efficiency (25.50 citations per document). Science mapping identifies four primary clusters: technical transparency methodologies, educational analytics frameworks, machine learning applications, and performance prediction systems. Co-citation analysis establishes Adadi and Berrada’s XAI survey (2018) as the foundational framework (11 citations, 24 total link strength). Temporal evolution shows progression from basic concepts toward practical implementations, reflecting regulatory compliance following GDPR and EU AI Act. International collaboration reveals South-South partnerships and high-impact contributions from countries with strong data protection frameworks. These patterns provide evidence for an emerging interdisciplinary domain addressing AI accountability in higher education, offering insights for researchers, practitioners, and policymakers.</p> Flavio Manganello, Alberto Nico, Giannangelo Boccuzzi Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4117 Thu, 04 Dec 2025 00:00:00 +0000 Sensory Characterization of Mezcal Using Free Choice Profiling and Machine Learning Tools https://papers.academic-conferences.org/index.php/icair/article/view/4381 <p>This study characterized the sensory profiles of four regional mezcals by integrating free-choice profiling with machine learning. A total of 1,148 consumers across four Mexican cities described the samples using their own vocabulary and rated attribute intensities. Machine learning analysis revealed significant correlations between sensory descriptors and demographic variables—such as age, gender, and origin—identifying detailed consumer preference patterns. The findings enable producers to customize products for specific market segments, enhancing their commercial strategies. 
By supporting traditional producers, this research also promotes sustainable economic growth in rural communities, aligning with the objectives of SDG 8.</p> Antonieta Martinez-Velasco, Sergio Erick García Barrón, Claudia Ariadna Acero-Ortega, Socorro Josefina Villanueva Rodríguez, Enrique J. Herrera López Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4381 Thu, 04 Dec 2025 00:00:00 +0000 The Ethics and Security Risks of AI Note-Takers in the Workplace https://papers.academic-conferences.org/index.php/icair/article/view/4366 <p>The rapid adoption of Generative Artificial Intelligence (AI) has led to its deep integration into daily work<br />environments, with AI note-takers emerging as a popular tool for transcribing and summarizing meetings using advanced<br />Natural Language Processing (NLP). While these systems enhance workplace efficiency by providing real-time transcripts and<br />concise summaries, they also introduce significant cybersecurity and ethical risks. Chief among these are questions of data<br />ownership, third-party data sharing, and compliance with consent laws. In jurisdictions with two-party or all-party consent<br />requirements, the deployment of AI note-takers raises legal challenges when participants are not fully informed or have not<br />consented to being recorded. Moreover, consent mechanisms across AI platforms are inconsistent and often buried in<br />complex disclosures, leading to uninformed use and potential violations of privacy. These risks are compounded by the<br />opaque data practices of many AI companies, which may retain or monetize recorded conversations. This paper conducts a<br />comprehensive analysis of privacy policies from leading AI note-taker providers, legislative frameworks across U.S. states,<br />academic literature, and recent investigative reports. 
We identify key policy gaps and technical oversights that could compromise user trust, organizational data security, and regulatory compliance. Case studies are presented to illustrate both the productivity gains and the harms—such as unauthorized data exposure—that result from AI note-taker misuse. We conclude by offering targeted recommendations for policymakers, developers, and organizational decision-makers to mitigate ethical and security risks. These include harmonizing consent practices, enhancing user transparency, and enforcing stricter data governance standards. Our findings aim to promote responsible innovation and ensure that the deployment of AI note-takers is aligned with ethical principles and privacy rights in modern workplaces.</p> Jacob McCarthy, Skyler Sax, Justice Ishio, Justin Gonzalez, Shreyas Kumar Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4366 Thu, 04 Dec 2025 00:00:00 +0000 Artificial Intelligence as a Tool in Cognitive Warfare on Digital Platforms https://papers.academic-conferences.org/index.php/icair/article/view/4353 <p>Cognitive warfare uses selective framing, AI and ICT to manipulate cognition, exploit vulnerabilities, and influence beliefs and decisions. AI supports both offensive and defensive cognitive warfare, feeding into kinetic warfare through psychological tactics. Cognitive warfare is immersive and long-term, often unnoticed, even via trusted actors. As AI and ICT evolve, unpredictable uses will emerge, raising ethical, legal, and moral challenges that demand multidisciplinary research. 
Future warfare will involve complex trade-offs, while democracies must prevent adversaries from weaponizing these tools—by using them strategically themselves.</p> Niina Meriläinen Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4353 Thu, 04 Dec 2025 00:00:00 +0000 Patterns of Adoption and Learning: Students’ Relationships with Generative AIs https://papers.academic-conferences.org/index.php/icair/article/view/4273 <p>Generative AIs have become part of increasingly intimate relationships between humans and technologies, and have created both serious concerns and heightened pedagogical interest in higher education. However, though interest in generative Ais and their contribution to education is spreading, little is still known about how they are used by students in their everyday lives and how this affects education. In this paper we investigate how GAIs such as ChatGPT enter students’ lives and become part of their learning. The paper draws on observations and interviews with students in a Master Program where students partnered with ChatGPT to investigate concepts and philosophical aspects of technology. The aim of the study was to understand how ChatGPT could support students’ collaborative group work as part of problem-based learning practices. As our study involved investigations of students’ everyday uses of generative AIs as well as their learning we were able to make connections between students’ learning strategies and their emerging experiences with GAI technologies. In the paper we investigate these relationships focusing on how GAIs are enrolled and participate in students’ lives and in collaborative learning contexts where uses and understandings of GAI technologies are negotiated in group sessions. 
Our data suggest that some students enter education with extensive experiences with generative AIs and others commence their engagement after meeting these technologies through education. What characterizes these patterns of adoption - and what are their effects on learning? Theoretically, we draw on sociomaterial approaches to understand GAIs as material agents in students’ lives and in education building on the concepts of <em>patterns of relations</em> and <em>distributed agency</em>. These concepts emphasize the collaborative relationship of humans and GAI technologies, underlining both the specifics of GAIs as ‘human-like’ agents and the blurring of agencies and authorship involved in e.g. AI-generated writing.</p> Bente Meyer, Ulla Højmark Jensen, Sara Paasch Knudsen Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4273 Thu, 04 Dec 2025 00:00:00 +0000 AI Crossroads of Security, Ethics, and Education: A Conceptual Framework for Responsible Adoption https://papers.academic-conferences.org/index.php/icair/article/view/4297 <p>In an era where artificial intelligence permeates every aspect of our lives, we find ourselves at a pivotal intersection of security, ethics, and education. This multidimensional framework invites us to explore the profound implications of AI and encourages a responsible, thoughtful approach to its integration. By prioritizing security and ethical considerations, we can unlock the transformative potential of AI while fostering a culture of responsible innovation in education. This framework serves as a crucial guide for navigating the complexities of AI adoption, ensuring that we harness its power for the betterment of society. This paper proposes a cross-sectoral framework that integrates security, ethics, and education to support the responsible, reliable, and equitable adoption of AI in small and medium-sized enterprises (SMEs) and educational institutions. 
As artificial intelligence continues to rapidly integrate into critical domains such as cybersecurity and education, a comprehensive approach is essential to address the unique risks and opportunities. This paper presents a cross-sectoral analysis of AI adoption in SMEs and education, focusing on cybersecurity posture, ethical considerations, and the challenges associated with integrating responsible AI within the enterprise. Drawing on recent research, we evaluate how AI-enabled threat detection and response can empower resource-constrained SMEs and educators while also highlighting emerging risks related to data privacy, model transparency, and algorithmic bias. Additionally, we examine the increasing use of generative AI tools within K–12 and higher education, identifying both pedagogical and ethical implications for curriculum development and digital literacy. The paper advocates for flexible governance and large language model training to facilitate the ethical deployment and use of AI across both the private sector and educational institutions. This framework will provide guidance for policymakers, educators, and technology leaders as they strive to strike a balance between innovation and responsible stewardship of AI.</p> Kasey Miller, Jake Townsend, Minoo Modaresnezhad, Corina White Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4297 Thu, 04 Dec 2025 00:00:00 +0000 Mapping Moves, Modes and Methods: Designing Socratic Conversational Agents for AI-Enhanced Learning https://papers.academic-conferences.org/index.php/icair/article/view/4380 <p>Conversational agents powered by Large Language Models (LLMs) are increasingly proposed as scalable tools for personalised learning support. Yet much existing research focuses on algorithmic capability rather than the nuanced human learning dialogues that shape educational practice. 
This leaves a gap in empirically informed frameworks for translating rich instructional conversation into actionable design principles for Socratic AI partners. This paper addresses this need through a secondary analysis of live, online design critiques conducted at a South African university - an environment reflective of many Global South contexts marked by resource constraints, student diversity, and socio-economic pressure. Building on the author’s doctoral research, the study synthesises previously collected empirical material, including surveys, a focus-group interview, and recorded critique sessions. A composite conceptual lens (Conversation Theory, Experiential Learning Theory, and Cognitive Apprenticeship) guided the interpretive analysis. The findings identify four recurring student–tutor relationship archetypes and four interaction dimensions that position critiques as formative, iterative, formal, and immersive. These insights are consolidated into a “moves–modes–methods” matrix that captures how knowledge is negotiated, feedback is scaffolded, and agency is fostered in the critique space. Mapping this matrix onto current scholarship on LLM-based tutors reveals both alignments, such as the value of probing questions, and tensions related to contextual sensitivity, including bandwidth limitations, student diversity, and socio-economic realities. 
By integrating detailed empirical insight with emerging work on AI-supported learning, the study offers an evidence-based framework to inform the design of conversational agents that augment human expertise while preserving the pedagogical integrity of the online critique in under-resourced, highly diverse settings.</p> Jolanda Morkel Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4380 Thu, 04 Dec 2025 00:00:00 +0000 Exploring Generative AI for Personalized Engagement in Therapy Adherence Contexts https://papers.academic-conferences.org/index.php/icair/article/view/4211 <p>This study explores the use of generative AI to create personalized virtual companions, referred to as "Tamagotchis," aimed at fostering emotional engagement and supporting therapy adherence. A prototype system was developed that generates individualized 2D image sequences representing symbolic objects or characters in different states, such as weakened, neutral, or improved. Through a mixed-methods evaluation involving a quantitative survey (n=78) and qualitative interviews (n=10), the study assessed the visual clarity, emotional resonance, and perceived relevance of the generated images. Results indicate that personalized objects were more emotionally engaging than predefined versions, and user involvement in the creation process significantly enhanced identification and motivation. These findings suggest that emotionally adaptive virtual companions may hold promise as a supportive element in long-term therapeutic interventions.</p> Mirella M. 
Moser, Nina Lauria, Thomas Keller Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4211 Thu, 04 Dec 2025 00:00:00 +0000 AI Generated Images for Education and Work Life: On Bias and Guardrails https://papers.academic-conferences.org/index.php/icair/article/view/4106 <p>The rapid multimodal development of Generative AI (GenAI) tools has opened up possibilities for content creation in many fields. This paper presents a study that had a focus on image generation for educational contexts and professional development in a course on artificial intelligence for education and work life. In an assignment, participants used different GenAI tools for image generation. Moreover, course participants analysed and discussed their AI generated images in essays. In the wide variety of GenAI tools two different image generation software were suggested, one from a big established IT company and the other from a small independent software developer. Both these tools were chosen because they are free to use without any licence fees, and that there are not any complex login procedures before using them. This was seen as important criteria for a group of course participants with relatively low pre-knowledge of GenAI and image generation. On the other hand, course participants with earlier experience of were allowed to use other and more advanced tools. &nbsp;Images could be generated individually or in groups, but the final analysis essays had to be written individually without any AI assistance. Four portraits should be generated with each tool of two world-wide well-known persons, one locally known person and a portrait of the course participant that wrote the essay. 61 essays were thematically analysed with the use of open coding and axial coding. 
Results were divided into the categories of Age, Gender and Ethnicity, Language, Guardrails, and Training data, with Training data as the central or axial category. Findings show that the results depend to a certain degree on prompting and language, but that the bias found depends on the training data.</p> Peter Mozelius Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4106 Thu, 04 Dec 2025 00:00:00 +0000 Interpersonal Trust Development in GenAI-augmented Organisations https://papers.academic-conferences.org/index.php/icair/article/view/4310 <p>Generative artificial intelligence (GenAI) is being increasingly adopted across organisations, where its integration into work has been shown to significantly enhance efficiency and productivity. However, GenAI's use introduces greater uncertainty about the reliability and quality of work. This uncertainty, combined with potential changes in social interaction as an outcome of GenAI use, may directly impact interpersonal relationships, especially trust, among employees. Yet, remarkably few studies explore GenAI’s impact on interpersonal relationships within organisations. This study, therefore, seeks to explore the impact of GenAI on interpersonal trust in organisations that have integrated GenAI to assist in the conduct of work, referred to here as GenAI-augmented organisations. In this study, we apply the organisational trust model, defining trust as the willingness to be vulnerable in response to perceived trustworthiness based on evaluated ability (skills and competencies), integrity (adherence to shared values and norms) and benevolence (concern for others). We explore how this response evolves in the context of GenAI use. We conducted nine qualitative semi-structured interviews in April-May 2025 with managers from knowledge-intensive, GenAI-augmented organisations. 
Our findings suggest that in GenAI-augmented organisations, managers tend to place greater trust in employees who demonstrate abilities requiring higher cognitive effort, such as using GenAI critically, asking questions, and understanding and explaining GenAI outputs. Integrity, described through the manner of GenAI use, particularly by demonstrating responsibility, maintaining transparency, providing evidence, and aligning with organisational policies, is also critical for developing interpersonal trust. Moreover, uncritical GenAI use that may burden others with more work can lead to a reduction in trust. In response to uncertainty, managers often increase supervision; however, this is not necessarily a sign of distrust but a strategy to manage uncertainty. To our knowledge, our study is one of the few qualitative studies exploring GenAI use in organisations. It provides a novel perspective that connects GenAI and interpersonal relationships. The findings have implications for management practices, organisational culture, and aligning GenAI to enhance trust and collaboration within organisations.</p> Svetlana Norkin, Kathrin Kirchner Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4310 Thu, 04 Dec 2025 00:00:00 +0000 AI Driven Cyber Deception in FinTech: An Adaptive Defense Strategy https://papers.academic-conferences.org/index.php/icair/article/view/4365 <p>FinTech platforms clear high-value transactions in milliseconds, making them lucrative targets for adversaries who increasingly weaponize artificial intelligence. Once an attacker bypasses the perimeter, via credential stuffing, supply-chain malware, or deep-fake social engineering, traditional defenses often alert too late to prevent loss. We present an Adaptive Deception Defense Framework (ADDF) that intertwines AI-orchestrated honeypots, honeytokens and decoy micro-services within everyday banking and payment workflows. 
A recurrent-neural threat profiler classifies live attacker behavior; a Proximal-Policy-Optimization agent then selects actions such as spawning a shadow login API, cloning a database or injecting synthetic ledgers, thereby misdirecting intruders while harvesting telemetry. In a controlled “FinBank” test-bed featuring a vulnerable Flask-and-MySQL stack, ADDF shortened mean time-to-detect from 3 min 42 s to 29 s, increased attacker dwell-time inside decoys to 12 min 18 s, and prevented all real data exfiltration across ten attack trials. False-positive alerts remained below 1% per run, and added resource use averaged 14% CPU/RAM on mid-range servers. The framework also produced high-fidelity indicators of compromise (password lists, malware binaries and lateral-movement scripts) that would have been unavailable under baseline controls. These findings indicate that AI-driven cyber deception can transform FinTech security from passive monitoring into proactive engagement, mitigating breach impact while supplying rich threat intelligence. The novelty lies not in new AI components but in orchestrating established ones (reinforcement learning, anomaly classification, and generative decoys) into a closed-loop deception system for real-time FinTech operations. The paper details system architecture, reinforcement-learning policy training, empirical evaluation and operational implications, showing how defenders can regain initiative in the AI-to-AI cyber arms race without disrupting legitimate customers or breaching regulatory duties.</p> Isaac Ojeh, Xavier Palmer, Lucas Potter Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4365 Thu, 04 Dec 2025 00:00:00 +0000 Adaptive AI Sentinels Against Phishing Attacks: Democratizing Cybersecurity Through Interactive Learning https://papers.academic-conferences.org/index.php/icair/article/view/4373 <p>Phishing attacks have become more convincing as generative AI enables attackers to create polished, context-aware emails that closely resemble legitimate communication. These messages often evade traditional filters that rely on surface features and leave users without a clear understanding of why a message may be harmful. This work introduces an adaptive phishing-detection system that uses natural language processing to model semantic, linguistic, and stylistic signals and produce a risk score indicating how phish-like or benign an email appears. 
A complementary large language model layer then performs contextual and intent-based reasoning to interpret the deeper meaning of the message and detect subtle social engineering cues. The system incorporates adversarial and prompt-safety checks to strengthen reliability against AI-generated threats, and, through a web app, it delivers short micro-lessons for each detection, helping users understand the psychological tactics involved and learn to recognize them in future messages. This research contributes to both cybersecurity and NLP by showing how semantic scoring and LLM-based reasoning can be operationalized together to counter AI-enabled social engineering while remaining interpretable for non-expert users. By combining accurate detection with continuous user education, the proposed solution strengthens trust, awareness, and long-term resilience, offering a scalable defense mechanism against modern phishing attacks.</p> Rishabh Pagaria, Jason Xiong, Ruihong Huang, Shreyas Kumar Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4373 Thu, 04 Dec 2025 00:00:00 +0000 AI for Social Media Summaries: An Encoder-Decoder Transformer System vs ChatGPT https://papers.academic-conferences.org/index.php/icair/article/view/4210 <p>In recent years, automatic text summarization has become a vital area of research due to its role in improving access to and understanding of vast information across domains. The rise of social media has intensified the need for summarization tools capable of handling user-generated content such as posts, comments, and discussions. Unlike structured texts, social media content is often informal, fragmented, context-dependent, and noisy. It frequently includes slang, abbreviations, emojis, and diverse writing styles, posing unique challenges for traditional summarization methods. 
While conventional approaches perform well on formal text, they often struggle to capture the nuances of online discourse. This highlights the need for specialized models that can generate coherent and context-aware summaries tailored to the characteristics of social media language. Recent advances in neural architectures, particularly Transformer-based sequence-to-sequence models, have shown promise in overcoming these challenges. These models excel at capturing long-range dependencies and contextual relationships, making them well-suited for summarizing dynamic and unstructured inputs. Despite technical progress, evaluating the quality of summaries remains difficult. Standard metrics like ROUGE may not fully reflect subjective qualities such as fluency, coherence, and semantic fidelity, which are essential for human-like summarization. This paper introduces a Transformer-based summarization system designed specifically for social media comments related to topical posts. We benchmark its performance against models like ChatGPT, assessing outputs across multiple linguistic and semantic dimensions. By combining both traditional and advanced evaluation metrics, our work provides a more holistic view of summarization quality and identifies key areas for future improvement.</p> Afrodite Papagiannopoulou, Chrissanthi Angeli, Panagiotis Makrigiannis Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4210 Thu, 04 Dec 2025 00:00:00 +0000 Adoption of Artificial Intelligence and Base Technologies of Industry 4.0: A Way to Improve Business Processes https://papers.academic-conferences.org/index.php/icair/article/view/4384 <p>Our article examines selected technologies associated with the Fourth Industrial Revolution (specifically the so-called base technologies) and their extent of utilization within business processes. 
The research was conducted on a sample of companies operating in the Slovak Republic. Based on data from 180 companies, we assessed the extent of implementation of these technologies. The results of our study show that some Industry 4.0 technologies have already been successfully implemented in most of the surveyed companies (in particular cloud computing for data storage, cloud networks for remote access to resources, and the Internet of Things). An especially positive finding is that cybersecurity measures have been implemented to the greatest extent, which is crucial for the successful and sustainable functioning of the entire Industry 4.0, with the importance of this area continuing to grow each year. On the other hand, artificial intelligence, machine learning, and deep learning are not widely used in most companies, nor do companies currently plan to adopt them. This field, however, provides significant potential for productivity growth and process optimization, and in recent years we have also observed a marked increase in the adoption of artificial intelligence within the general population. At present, only 14% of companies have successfully implemented these technologies. According to our findings, companies that do employ artificial intelligence and machine learning use them primarily in business processes to facilitate the analysis and evaluation of large datasets.</p> Zuzana Papulova, David Smolka Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4384 Thu, 04 Dec 2025 00:00:00 +0000 AGS-INTEL: Authentic & Granular Source for Data Breach Intelligence https://papers.academic-conferences.org/index.php/icair/article/view/4344 <p>As artificial intelligence reshapes the cybersecurity landscape, the demand for a trustworthy, real-time intelligence platform to track security incidents has become mission-critical. 
This paper proposes AGS-INTEL, an AI-driven platform designed to revolutionize data breach intelligence by providing a credible, real-time repository that consolidates, verifies, and contextualizes global security incidents. Unlike traditional databases, AGS-INTEL employs a validated scoring algorithm and enriched metadata to capture breach dimensions (legal, technical, sectoral, geopolitical), drawing from GDPR/HIPAA disclosures, threat intelligence, dark web forums, and academic reports, among other sources. Utilizing NLP and agentic AI, it extracts structured metadata from unstructured narratives while integrating ethical data scraping, regulatory compliance, and cross-jurisdictional filtering to ensure high fidelity. A visual analytics dashboard empowers stakeholders, including regulators, policymakers, cybersecurity professionals, and journalists, to analyze breach trends by industry, geography, and threat modality, enhancing transparency and risk governance. By delivering authenticated, actionable data, AGS-INTEL addresses critical gaps in existing tools, setting a new standard for ethical AI in breach intelligence and strengthening societal resilience against escalating cyber threats.</p> Anil Parthasarathi, Sean Cho, Shreyas Kumar Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4344 Thu, 04 Dec 2025 00:00:00 +0000 Prompt Engineering Language in The Agile Product Backlog Refinement Process https://papers.academic-conferences.org/index.php/icair/article/view/4227 <p>Contemporary advanced business services and products increasingly incorporate artificial intelligence components, such as chatbots and generative AI, across various domains. These are delivered through dedicated, complex, innovative software programs and projects initiated by virtually all industry sectors. 
Complex project management includes challenges such as: the complexity of product backlogs containing large numbers of requirements, highly dynamic changes in customer expectations impacting product backlog quality and requirements engineering, labour shortages, advanced tool adoption and automation, predictability of deliveries, insufficient transparency of processes applied to product backlog management, and communication barriers between business and project teams. The primary objective of this paper is to address a key research gap related to the insufficient quality of product backlog management in complex agile software project environments. The paper addresses the research question regarding the potential application of chatbots and generative AI as a methodology to conduct reliable agile product backlog evaluation and subsequently enhance refinement processes through dedicated Prompt Engineering Language (PEL). This paper emphasizes the importance of a structured description of product backlog items and its impact on the overall quality of the agile product backlog and of delivered software products. Following the literature review, the author's empirical research presents a detailed analysis of the research gap and focuses on applying chatbot and generative AI solutions to evaluate agile product backlog items and to improve related agile refinement processes. Research results demonstrate that agile product backlog refinement processes can be supported by chatbots and generative AI utilizing dedicated Prompt Engineering Language (PEL). These tools are not designed to create business value directly but rather enhance the efficiency and automation of product backlog management processes to respond rapidly to stakeholder expectations within agile environments, ultimately achieving superior project outcomes. 
Nevertheless, numerous challenges must be addressed, particularly those related to AI governance and compliance with a range of policies, including data privacy, intellectual property, legal, security and internal ones.</p> Pawel Paterek Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4227 Thu, 04 Dec 2025 00:00:00 +0000 Use of GenAI to Obtain Public Information on Plastic and Reconstructive Surgery Procedures: A Focus on Migraine Surgery https://papers.academic-conferences.org/index.php/icair/article/view/4145 <p>Background: In recent years, Generative Artificial Intelligence (GenAI) technologies have seen rapid development and widespread public release. These tools are now accessible to both professional researchers and the general public, offering new ways to obtain information across a wide range of disciplines, including medicine. Given their growing presence in clinical and academic environments, it is important to understand the reliability and scientific rigor of the information these platforms provide. The aim of this study was to evaluate the quality, accuracy, depth and readability of responses generated by nine widely available GenAI tools when asked to describe the indications, outcomes, and potential complications of migraine surgery, and the available alternatives. Methods: The nine most prominent and widely used GenAI platforms—ChatGPT, Gemini, Perplexity, Elicit, Scispace, Consensus, PaperPal, Julius, and Mistral AI—were prompted with the same standardized questions: <em>“Detail the outcomes and complications of migraine surgery.”</em>, “<em>What are the indications for migraine surgery?” and “What are the alternatives to migraine surgery?</em>”. The responses were then assessed in terms of scientific credibility, clarity, readability, depth of information, and the presence of references to peer-reviewed literature or established medical knowledge. 
Results: Overall, the responses received were highly satisfactory. All tools delivered prompt replies that were logical and scientifically credible. However, the level of detail, specificity, and accuracy varied across platforms. The most comprehensive and detailed answers were provided, in order, by Mistral, Julius, Scispace and Consensus, while the most readable ones were given by Mistral, ChatGPT and Elicit. Readability analysis indicated that content generally required a college-level education. Conclusions: Across all platforms, the core principles of migraine surgery were accurately outlined, including the indications, the expected outcomes, the incidence and severity of possible complications, and the possible alternatives. In terms of results, all platforms clearly emphasized the consistently high success rate of the procedure, conveying an accurate overview of its clinical relevance.</p> Edoardo Raposio, Elisa Bertulla Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4145 Thu, 04 Dec 2025 00:00:00 +0000 A Literature Review on AI for Lifelong Learning: Tools, Benefits, and Opportunities https://papers.academic-conferences.org/index.php/icair/article/view/4209 <p>Lifelong learning plays a crucial role in both personal development and societal advancement. By continually enhancing their skills, individuals can better adapt to change and contribute to progress. In this regard, artificial intelligence (AI) supports lifelong learning by enabling personalised learning experiences, increasing accessibility, and fostering continuous education. This study examines existing research on AI’s role in supporting lifelong learning, with a focus on personalised education, skill development, and the reduction of learning gaps across educational stages. 
A systematic literature review was carried out to examine the evolving contributions of AI to the development and support of lifelong learning practices, following the standards outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Articles relevant to the study’s objectives were identified through a systematic search of the Scopus database, limited to English-language publications in the field of social sciences over the past ten years. The search strategy employed the following string: (“Artificial Intelligence” OR “AI”) AND (“Lifelong learning” OR “continu* education”) AND (“personali* learning” OR “skill* development”). This process yielded a total of 14 selected studies, from which three themes were identified through thematic analysis: 1) perspectives shaping AI in lifelong learning, 2) benefits of AI tools in education, and 3) AI’s potential for optimising and transforming learning. Findings show that diverse perspectives, along with social and cultural factors, shape the design and effectiveness of AI in lifelong learning. Various AI tools, such as adaptive learning platforms, provide personalised content and immediate feedback, enabling learners to progress at their own pace and promoting skill acquisition. These tools offer clear benefits: they foster personalised learning experiences that go beyond mere productivity gains to truly enhance learners’ capabilities. Personalised education models also optimise resource allocation by tailoring content to individual needs, improving outcomes in higher education settings. Looking forward, AI presents significant opportunities to transform education through tailored learning and teacher support. 
However, AI must balance technology with human interaction to foster the critical thinking, creativity, and problem-solving essential to lifelong learning.</p> Jussara Reis-Andersson Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4209 Thu, 04 Dec 2025 00:00:00 +0000 Does AI Help (or Hinder?) Sustainability Marketing https://papers.academic-conferences.org/index.php/icair/article/view/4351 <p>This research study builds on the longitudinal research (Robertson, Deaville 23, 24) which highlighted factors relating to how the marketing industry is using Artificial Intelligence (AI). Content generation was the biggest area of use (and concern) within the sector. However, the research highlighted some major concerns, limitations and worries about “fake”, “untruthful” and “unreliable” content and the “legal” implications arising from this, which served as the inspiration and starting point for this study. This research paper aims to further develop and reflect on the implications and issues of human-led content versus AI-generated content in the copywriting of sustainable / environmental marketing materials. It is hoped that this research will inform future development for practitioners developing environmental messaging for organisations.</p> Giles Robertson Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4351 Thu, 04 Dec 2025 00:00:00 +0000 A Hybrid Model for Stock Market Forecasting: Integrating News Sentiment and Time Series Data with Graph Neural Networks https://papers.academic-conferences.org/index.php/icair/article/view/4294 <p class="p1">Stock market prediction has long been a challenging problem in the field of finance and investment. Accurately predicting the movements of stock prices is crucial for making informed decisions and maximizing investment returns. Traditional models mainly use historical prices. 
We identified a research gap in integrating financial news into such models, an approach that has emerged as a promising direction for enhancing predictive accuracy. This research addresses this gap by exploring a multimodal approach that combines companies’ news articles and their historical stock data to predict future stock movements. The objective was to compare the performance of a Graph Neural Network (GNN) model with an LSTM model. The methodology involves an LSTM model that embeds the historical data for each company and a language model that embeds news articles. These embeddings represent nodes whose relationships are represented by edges within a graph. Using a GNN message aggregation technique known as GraphSAGE, the model captures interactions and dependencies between news articles, companies, and industries and uses this information to predict future stock movements. Two target variable approaches are explored: one focusing on the binary classification of whether the stock price will increase or decrease, and the other considering the significance of the increase. This methodology was evaluated on two datasets, the US equities dataset and the Bloomberg dataset. The results showed that the GNN model achieved better performance than the baseline LSTM model on both datasets. The GNN model achieved an accuracy of 53% on the first target, a statistically significant 1% improvement over the baseline, and a 4% precision gain on the second target, which confirms the effectiveness of exploiting financial news using graph-based models. Furthermore, we observed that increasing the number of news samples led to improved accuracy. 
We also find that headlines contain a stronger predictive signal than full articles, which is consistent with evidence that headlines disproportionately shape readers’ judgments and market reactions.</p> Nader Sadek, Mirette Moawad, Christina Naguib, Mariam Elzahaby Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4294 Thu, 04 Dec 2025 00:00:00 +0000 MediMate AI: An AI Assistant for General Practitioners https://papers.academic-conferences.org/index.php/icair/article/view/4333 <p>Primary care remains one of the most demanding areas of medicine, with general practitioners (GPs) facing rising consultation volumes, complex patient presentations, and increasing administrative tasks. These pressures contribute to cognitive overload, diagnostic delays, and burnout. Large language models (LLMs), such as GPT-4, show potential to reduce documentation workload, enhance diagnostic reasoning, and improve overall workflows. However, their integration must be carefully managed to ensure data protection, patient safety, compliance with clinical guidelines (e.g., WHO, CDC), and ethical standards. This paper introduces MediMate AI, a GPT-4–powered prototype assistant designed to support GPs in real time. Implemented as a web-based, mobile-accessible platform, the system integrates multimodal inputs—including speech, text, and images—to transcribe consultations, extract key symptoms, and generate structured summaries. Beyond documentation, MediMate AI provides differential diagnoses with confirmatory test recommendations, evaluates geographic epidemiological risks, and produces tailored hospital routing plans. The prototype was tested in a digital innovation incubator using synthetic patient records and simulated consultations. This approach enabled safe experimentation without breaching patient confidentiality while providing early insights into feasibility, usability, and workflow integration. 
By combining transcripts, symptom extraction, dermatology image data, and standardized checklists, the prototype reflects the heterogeneous nature of real-world primary care. Results indicate that MediMate AI can reduce documentation workload by an estimated 25–30%, deliver clinically coherent summaries, and generate guideline-aligned differential diagnoses with improved clarity for decision-making. Physicians testing the prototype highlighted its ability to consolidate fragmented data streams, improve continuity of care, and enhance patient–clinician communication. While not intended to replace medical expertise, MediMate AI demonstrates the promise of generative AI to augment decision-making, improve efficiency, and support more patient-centred care. Future work will include prospective clinical validation, integration with electronic health record systems, and the introduction of explainability and bias detection modules. Addressing these aspects will be essential to ensure safe, ethical, and sustainable deployment in healthcare environments.</p> Jan Saro, Jana Mazancová, Helena Brožová Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4333 Thu, 04 Dec 2025 00:00:00 +0000 Synthetic Data Generation Using CTGAN with Agentic Workflows and Retrieval-Augmented Generation https://papers.academic-conferences.org/index.php/icair/article/view/4280 <p>Real-world data in domains such as finance and fraud detection can be rare, imbalanced, or inaccessible, necessitating synthetic data as a crucial alternative. Gathering and leveraging real-world data in such domains is subject to important challenges such as privacy issues, legality, high cost of annotation, and restricted access due to proprietary ownership. 
Synthetic data generation in this context offers a meaningful alternative to real data gathering, reducing both privacy and computational costs while allowing for the construction of flexible, scalable datasets. This paper presents a new paradigm for tabular data synthesis through CTGAN (Conditional Tabular GAN) with integration into agentic workflows and retrieval-augmented generation (RAG). The proposed system accepts partial data samples and column constraints as inputs through a user-friendly chatbot interface and augments the dataset intelligently through an AI-agent-based generation pipeline. These AI agents aid in automating preprocessing, interpreting column semantics, and enforcing user constraints expressed in natural language, considerably minimizing manual intervention. The framework further includes ChromaDB to enable semantic retrieval of past relevant datasets. With this semantic memory, the model can improve generation quality, enforce schema-level consistency, and even support synthesis of new datasets based on column names or metadata alone. It allows for context-aware, structurally sound, and domain-conformant data generation, without the need to access sensitive or full datasets. The current research utilizes statistical measures like mean, variance, and the Kolmogorov–Smirnov (KS) test to confirm the fidelity of data produced. The approach maintains a mean difference of just 0.16% and a KS statistic of 0.0020, which reflects outstanding statistical consistency with the original data distributions. Preliminary results show significant enhancements in data realism, diversity, and variability without sacrificing domain coherence. 
The system introduced is particularly well-adapted to financial datasets, such as applications in credit card fraud detection, and offers a scalable, privacy-aware method of synthetic data generation in sensitive or data-scarce environments.</p> Sinchana K C, Maria George Anthraper, Kusuma Sanjaykumar, Shruti Kumari, Uma D Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4280 Thu, 04 Dec 2025 00:00:00 +0000 AI Surveillance Technologies in Smart Cities: Privacy Calculus versus Privacy Paradox https://papers.academic-conferences.org/index.php/icair/article/view/4356 <p>Smart cities represent the convergence of technological innovation and urban development, aiming to enhance efficiency, safety, environmental sustainability, and overall well-being through interconnected systems, sensors, and digital devices. At the heart of these innovations lies the deployment of AI-powered surveillance technologies, which contribute to monitoring and managing urban environments more effectively. While such systems promise improvements in security and operational efficiency, they also raise pressing concerns about individual privacy and data security. This study examines the tension between technological progress and privacy preservation in AI-based surveillance systems, focusing on how citizens from seven smart cities perceive and respond to these technologies. Drawing on a quantitative pilot survey conducted in seven smart cities and using a five-dimensional framework of privacy concerns, this paper maps citizen attitudes towards AI surveillance technologies. These are cross-analysed against seven distinct categories of AI surveillance technologies deployed in public spaces. 
A central question of the analysis is whether individuals’ responses reflect the privacy calculus – a rational evaluation of risks and benefits – or are more consistent with the privacy paradox, where expressed concerns do not translate into protective behaviours, often due to insufficient awareness or a lack of options to opt out. In addition to assessing the overall levels of privacy concern, the study ranks five privacy dimensions based on the degree of concern they elicit and evaluates which types of AI surveillance technologies are most and least acceptable when privacy is factored into the adoption equation. We further introduce a privacy-weighted adoption attractiveness metric to measure public receptivity to the seven types of AI surveillance technologies. The findings, derived through descriptive statistical methods, reveal trends and peculiarities across the cities and the respondents’ demographic characteristics, such as gender, education level, and age. These insights contribute to a more nuanced understanding of how privacy values interact with the promises of AI surveillance in smart cities.</p> Inga Stankevice, Aaiyushi Baid Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4356 Thu, 04 Dec 2025 00:00:00 +0000 Quantization Methods for Energy Efficient LLM Deployments https://papers.academic-conferences.org/index.php/icair/article/view/4367 <p>The deployment of large language models (LLMs) in production environments faces significant challenges due to computational and energy requirements during inference. This paper presents a comprehensive empirical analysis of quantization methods applied to the Qwen3 model family, ranging from 0.6B to 32B parameters. We evaluate five quantization approaches (GPTQ 4-bit, GPTQ 8-bit, AWQ, FP8 W8A8, and INT8 W8A8) alongside the original FP16 baseline across six established benchmarks (MMLU, HumanEval, TruthfulQA, MetaBench, GSM8K, ARC Challenge). 
Our analysis examines the relationship between model size, quantization method, accuracy preservation, energy consumption, and inference performance across various context lengths. We demonstrate that larger Qwen3 models exhibit increased resilience to quantization-induced accuracy degradation, while aggressive quantization methods provide substantial energy savings with acceptable trade-offs in model performance. These findings provide crucial insights for optimizing LLM deployments in resource-constrained environments.</p> Tomislav Subic Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4367 Thu, 04 Dec 2025 00:00:00 +0000 AI’s Environmental Cost: Comparing Resource Consumption Between SLMs and LLMs Across Queries https://papers.academic-conferences.org/index.php/icair/article/view/4345 <p><span style="font-weight: 400;">As artificial intelligence becomes increasingly embedded in daily life, the environmental costs of its deployment remain underexplored. This study investigates the environmental footprint of both large language models (LLMs) and small language models (SLMs); specifically, ChatGPT, Gemini, Deepseek, and Claude, by associating their power draw and water use across queries of varying complexity. Building on evidence that AI services demand substantial resources, this paper asks: how do query complexity and type influence the energy and water consumption of SLMs versus LLMs, and at what threshold of complexity do SLMs become incapable of delivering accurate outputs? To address this, the experimental method categorizes queries into three complexity tiers based on logical steps, conceptual depth, and cognitive skills (recall, evaluation, creation), drawing from the College Board’s question bank of SAT math, reading, and writing problems. Additionally, classic puzzles such as the Tower of Hanoi were selected. 
Each query was executed three times on the SLM and LLM versions of each commercial AI entity under identical hardware and software configurations. We recorded execution time, model version, and output accuracy. Using the average response time per query, we computed energy consumption and water usage per query. On average, SLMs consumed 60-70% less energy and water than their LLM counterparts, and in subjects such as Math and Reading, had the same level of accuracy as their respective LLMs. However, model performance declined as question difficulty increased, especially in abstract reasoning tasks such as Puzzles, where SLM accuracy dropped considerably. While LLMs were more resource-intensive, they maintained higher accuracy on these challenging queries. SLMs offer a significantly more environmentally sustainable option for simple tasks, but accuracy decreases as complexity increases. A dynamic approach, starting with SLMs and switching to LLMs only when needed, or vice versa, could reduce the environmental cost of AI while maintaining quality. These findings support the potential for context-aware AI deployment strategies that optimize environmental sustainability and accuracy. 
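The per-query footprint computation described above (energy from average response time, water from energy) reduces to simple arithmetic. The power draw and water-usage-effectiveness figures below are hypothetical placeholders for illustration, not the study's measured values:

```python
def query_footprint(response_s: float, power_w: float, wue_l_per_kwh: float):
    """Energy (Wh) and water (mL) for one query, from response time and power draw.

    power_w and wue_l_per_kwh are illustrative assumptions, not measurements.
    """
    energy_wh = power_w * response_s / 3600.0          # W * s -> Wh
    water_ml = (energy_wh / 1000.0) * wue_l_per_kwh * 1000.0  # kWh * L/kWh -> mL
    return energy_wh, water_ml

# Hypothetical numbers: a 12 s response on hardware drawing 300 W,
# at a data-centre water-usage effectiveness of 1.8 L/kWh.
energy, water = query_footprint(12.0, 300.0, 1.8)
print(round(energy, 2), round(water, 2))  # prints: 1.0 1.8
```

Comparing this arithmetic for an SLM (shorter response time, lower power draw) against an LLM is what yields the 60-70% savings figure reported above.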
Future research should aim to quantify this breakpoint more accurately and investigate automatic query classification systems capable of switching between models efficiently.</span></p> Aryaanshi Sundaram, Sparsh Kamdar, Shreyas Kumar Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4345 Thu, 04 Dec 2025 00:00:00 +0000 Between Spirit and Silicon: Reflections on the Magisterium and Canon Law in the Interreligious Challenge of Artificial Intelligence https://papers.academic-conferences.org/index.php/icair/article/view/4191 <p>This paper analyses the relationship between the Catholic Church and artificial intelligence, with a focus on the magisterium, canon law and the pastoral dimension. After setting out an ethical-theological framework, it examines the contribution of the recent magisterium and of canon law in the light of emerging technological challenges. It then examines the comparative perspective of other monotheistic religions, Judaism and Islam, which, albeit in different ways, emphasise the importance of ethics, human dignity and moral responsibility in the use of AI. Beyond a descriptive overview, the paper critically engages with the limitations and risks of integrating AI into ecclesial life, especially concerning sacramental authenticity, pastoral accompaniment, and ecclesiastical governance. While AI can offer valuable tools for administrative efficiency and educational support, it also raises questions of depersonalisation, algorithmic bias, and potential erosion of pastoral authority. The aim is to offer a systemic, interdisciplinary and interreligious view of how the major religious traditions are positioning themselves with respect to the impact of artificial intelligence on human and community life.
In conclusion, the paper argues for the development of a theologically informed regulatory framework that safeguards human dignity, strengthens the pastoral mission of the Church, and fosters interreligious cooperation in global ethical governance of AI.</p> Daniela Tarantino Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4191 Thu, 04 Dec 2025 00:00:00 +0000 English-Czech Output Bias in LLMs: A Geometry-Based Case Study https://papers.academic-conferences.org/index.php/icair/article/view/4326 <p><span style="font-weight: 400;">The rapid integration of large language models (LLMs) into educational, professional, and public discourse has prompted increasing scrutiny of their multilingual capabilities. While English dominates as a testing and training language, understanding LLM performance in less-resourced languages—such as Czech—is critical for equitable AI deployment. This study investigates a subtle but systematic bias in LLM behaviour: the relative verbosity of their responses in Czech versus English within the domain of elementary geometry. </span><span style="font-weight: 400;">We compiled a dataset of 48 paired mathematical prompts, posed in both Czech and English to six prominent LLMs (ChatGPT, Claude, Gemini, Mistral Large, Copilot Quick-Nuance, and Copilot Deep-Thinker), yielding 576 total responses. Each model was accessed in a controlled language-specific context to ensure fair comparison. Using surface-level metrics—word count and character count—we observed a consistent pattern: English responses were significantly longer than Czech ones across all models. Statistical analysis confirmed the robustness of these differences, with medium to large effect sizes (Cohen’s </span><em><span style="font-weight: 400;">d</span></em><span style="font-weight: 400;">) in both metrics. 
Notably, even morphologically richer Czech did not yield longer outputs in character count, contradicting initial assumptions. </span><span style="font-weight: 400;">Beyond confirming a consistent verbosity gap, our analysis employed rigorous statistical testing, including paired t-tests and Wilcoxon signed-rank tests, as well as effect size estimation to quantify the magnitude of the disparity. We interpret these findings in the context of known architectural and training imbalances in LLM development—particularly differences in how text is segmented and processed, alongside the relative abundance of English-language data. While stylistic conventions and user context may also influence response length, our results consistently indicate that LLMs, even those marketed as multilingual, tend to produce more verbose output in English. This raises concerns about potential discrepancies in explanation quality across languages, which may have implications for fairness and pedagogical effectiveness in multilingual educational settings. </span><span style="font-weight: 400;">The study lays the groundwork for follow-up research that will move beyond surface metrics toward semantic content analysis of mathematical reasoning across languages. Future work will assess whether English verbosity corresponds to greater mathematical depth, or if Czech responses deliver equivalent content more concisely. 
This line of inquiry is vital for ensuring fairness, clarity, and effectiveness in multilingual AI deployment—especially in contexts such as mathematics education, where explanation quality directly impacts learning outcomes.</span></p> Michaela Tichá, Jiří Přibyl, Magdalena Krátká Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4326 Thu, 04 Dec 2025 00:00:00 +0000 Fostering Trust for Effective Information Sharing and Collaboration https://papers.academic-conferences.org/index.php/icair/article/view/4317 <p>Trust is crucial for the effective information exchange needed to counter possible hybrid influence. This literature review outlines current views on strategies to build trust between stakeholders, such as creating strong partner relationships, communicating transparently about how information is used, and adhering to clear standards and guidelines. Besides stakeholder cooperation, the impact of trust is also seen in, for example, regulatory frameworks, data protection, information security, and information exchanges between IT systems. The results of this study emphasise the importance of trust in building the situational awareness needed to identify hybrid influence. Confidence in all involved stakeholders and systems can enhance situational awareness, as effective collaboration, for example, helps promote safety by enabling stakeholders to comprehend and react effectively to adverse events and information security failures. Trust becomes a fundamental prerequisite for the successful exchange of knowledge and information among stakeholders. Fostering trust can help stakeholders improve situational awareness, ensure effective decision-making, and achieve shared goals in various domains. Trusted information exchange may unlock opportunities for innovation, growth, and mutual prosperity.
In today’s information-driven world, trust can be seen as a cornerstone of effective communication and successful collaboration among stakeholders. Results suggest that trust-building among stakeholders can lead to a secure reliance on the integrity and reliability of others and thus promote comprehensive approaches to, for example, managing emerging risks. This implies that sensitive information could be exchanged without fear of misuse or disclosure; trust between stakeholders seems to be more significant than mere cooperation. IT systems, regulatory frameworks, data protection, and security are all affected by trust in information exchange. To effectively increase their respective and collective situational awareness and safety, stakeholders need a foundation of trust to work efficiently.</p> Ilkka Tikanmäki, Harri Ruoslahti Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4317 Thu, 04 Dec 2025 00:00:00 +0000 Artificial Intelligence and Demographic Changes: A Vision for 2050 https://papers.academic-conferences.org/index.php/icair/article/view/4243 <p>There has been a lot of talk about AI in recent years, but mostly from two perspectives. First, there is significant interest in the relationship between AI and the political environment, which raises numerous questions regarding the temptation of dictatorships, as well as the far from innocent methods used to conduct electoral campaigns. The second major area of interest in AI research is the labour market, specifically how it is influenced and changed in real time, implicitly considering the jobs lost or newly created. There is one aspect that influences both the labour market and a country's political choices, and that is demography. Obviously, it is not the only factor and cannot be analysed without considering its determinants: the economy, legislation, healthcare systems, etc.
Since countries have neither the same population nor the same birth rates, it is important to try to understand "something" about the effects that new technology will bring to this area of interest, because from this we can make certain forecasts about the development of some countries, the decline of others, and – under certain conditions – even about changes to the borders of some countries. At this moment, it seems that governments (regardless of their economic development level) are not fully aware of the changes that AI will bring to interpersonal relationships. If they are only interested in the political aspect, they will see that the demographic future of their own countries could also be influenced by AI devices, which can become so integrated into the life of the average person that they completely change their outlook on life and, at the very least, their visions of having children. As a result of these changing visions of having children, however, it is possible that we will see countries seeking, among other things, unification with countries that are demographically stronger, because population decline as an effect of AI will strain public budgets so much that they become unsustainable, forcing governments to search for a form of "saving the passive" through unification with other countries.</p> Marius Vacarelu Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4243 Thu, 04 Dec 2025 00:00:00 +0000 The Augmented Human and the Future of EducA(I)tion https://papers.academic-conferences.org/index.php/icair/article/view/4340 <p>Given recent technological advances (chatbots in particular) and, by extension, the gravitational pull to utilize technology (including AI-driven educational technology) in education, a need arises to reconceptualize the most fundamental assumptions and understandings of education itself.
This also involves questioning (and possibly even redefining) our humanity and human-ness. I am thinking along the lines of both critical posthumanism and transhumanism. Education has hitherto meant leading the subject (student, pupil) into humanity and being fully human (Snaza, 2013). But does this definition need reform, and if so, how? Several other questions arise from here: what will happen when – if – the human stops being human, at least in the sense understood until today? Will education be defined as something completely different? Will it, say, side-track the foundational role of teachers, schools, curricula, socialization, etc., contrary to the understanding that education means <em>human</em> relationships between teachers and students (Knox, 2019)? To these dilemmas, I offer the following lines of thought, which lead neither to unproductive technophobia nor to uncritical euphoria. It is maybe (high) time to think about changing the categories, concepts, and our use of language to describe AI-related issues. I suggest two moves to start with: (1) a move away from anthropocentrism, both in language and concepts, (a) to include non-human entities and (b) to name the hitherto unknown and “unconceptualized” phenomena, and (2) a move away from thinking about AI as merely an automaton and task-substitution agent, and from “injecting” it (Susskind, 2025) into the current ways of going about our business, with the aim for AI to replicate humans’ (in our case teachers’) work.
And, as for education, the use of technology transforms both people and their experiences (Jamison and Haraway, 1992); digitally transmitted experiences may differ from live experiences (i.e., teacher-pupil interactions), which would require special attention, along with raising awareness of emergent hierarchies, most importantly between those whose capacities are enhanced by various technologies and those who are not “improved” in that way.</p> Valerija Vendramin Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4340 Thu, 04 Dec 2025 00:00:00 +0000 Abstraction and Reasoning Abilities in Artificial Intelligence Applied to Solving the ARC Prize: A Systematic Literature Review https://papers.academic-conferences.org/index.php/icair/article/view/4311 <div><span lang="EN-GB">In recent years, the development of AI-based systems has seen a drastic increase in popularity and investment. To assess and measure specific capabilities of AI-based systems, different benchmarks have been established. AI-driven approaches tend to outperform humans on most of these benchmarks, but no AI-based system has been able to surpass average human performance on the Abstraction and Reasoning Corpus (ARC) benchmark. This paper presents an extensive PRISMA-guided literature review that assesses and classifies the techniques and technologies utilized by solution approaches for the ARC benchmark. In total, 538 manuscripts are screened, resulting in the inclusion of 65 publications in the final systematic literature review. As a result, a knowledge graph consisting of the manuscripts’ review protocols is created that provides further insight into the classification of solution approaches. Furthermore, an estimate of possible synergies and ensemble combinations between different approaches is provided by analyzing the task-level performance of solution approaches.
The estimation is conducted based on the heat-maps created using the Szymkiewicz-Simpson coefficient and the Gain coefficient.</span></div> Zakhar Zinkevich Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4311 Thu, 04 Dec 2025 00:00:00 +0000 Curiosity-Driven Learning and Autonomous Skill Acquisition: Multi-Modal Exploration for Self-Directed AI Development https://papers.academic-conferences.org/index.php/icair/article/view/4375 <p>Current AI systems remain fundamentally limited by their dependence on human-designed curricula and externally specified learning objectives, constraining their capacity for autonomous development and open-ended skill acquisition. This paper introduces Curiosity-Driven Autonomous Learning Networks (CDALNs), a comprehensive framework that enables AI systems to autonomously discover, develop, and master new skills through sophisticated multi-modal curiosity mechanisms and self-directed exploration. Our approach implements Multi-Modal Curiosity Systems (MMCSs) that drive exploration across sensory, motor, cognitive, and social domains, combined with Skill Synthesis Networks (SSNs) that can autonomously compose and refine complex capabilities from simpler components. We develop Autonomous Curriculum Generation (ACG) mechanisms that create personalized learning progressions based on the system’s current capabilities and interests, while Competence Assessment Networks (CANs) provide continuous evaluation of skill development and mastery. The framework incorporates Intrinsic Motivation Engines (IMEs) that generate diverse forms of curiosity including epistemic, diversive, and empowerment-based drives, enabling sustained autonomous learning without external rewards. 
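The Szymkiewicz-Simpson (overlap) coefficient used for the ARC review's synergy heat maps has a compact definition, sketched below with invented task IDs; the review's Gain coefficient is specific to that study and not reproduced here:

```python
def szymkiewicz_simpson(a: set, b: set) -> float:
    """Overlap coefficient: |A intersect B| / min(|A|, |B|).

    1.0 means the smaller set is fully contained in the larger one;
    0.0 means the sets are disjoint (or one is empty).
    """
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Hypothetical sets of ARC task IDs solved by two different approaches:
solver_a = {"t1", "t2", "t3", "t4"}
solver_b = {"t3", "t4", "t5"}
overlap = szymkiewicz_simpson(solver_a, solver_b)  # 2 / min(4, 3) = 2/3
```

A low overlap between two solvers' solved-task sets is what signals a promising ensemble: each solver covers tasks the other misses.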
Experimental validation across diverse domains demonstrates a 267% improvement in autonomous skill acquisition rate, a 145% increase in skill diversity, and emergent capabilities including spontaneous tool creation, collaborative skill development, and meta-skill acquisition for learning how to learn more effectively. Our approach establishes foundational principles for truly autonomous AI systems capable of lifelong learning and self-directed development, representing a paradigm shift from externally guided to genuinely autonomous artificial intelligence.</p> Bhaskar Jyoti Dutta Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4375 Thu, 04 Dec 2025 00:00:00 +0000 Using ChatGPT for Quantitative Content Analysis: Opportunities and Challenges in Construction and Sustainability Research https://papers.academic-conferences.org/index.php/icair/article/view/4336 <p>Artificial Intelligence (AI), especially Large Language Models (LLMs) like ChatGPT, is changing the way researchers can process and analyse qualitative data. In this paper, the use of ChatGPT for Quantitative Content Analysis (QCA) is tested by applying it to interview material about digital construction technology and sustainability. Two versions of the same data are compared: (1) complete transcripts of five interviews with professors, and (2) a shorter summarized version of the same interviews (the summaries were prepared by the researcher). With the same workflow, ChatGPT performed several steps: preprocessing (splitting the text into words, removing very common small words, and reducing words to their basic form), keyword extraction, thematic coding with five categories, and a simple sentiment analysis.
The aims were: (a) to see whether ChatGPT can find the main themes reliably, (b) to compare results from full transcripts versus summaries, and (c) to understand what practical advantages and problems appear when using ChatGPT in a real research situation. The results were similar at the general level: Digital Technology and Sustainability were the strongest themes in both datasets, followed by Education/Training, Benefits, and Barriers. The sentiment analysis gave slightly positive values in both (+0.18 for transcripts, +0.16 for summaries). At a more detailed level, the transcripts included more technical words (for example “embodied carbon”, “Life Cycle Analysis (LCA)” and standards), while the summaries included more general terms, which made the counts higher. Some practical issues also influenced the work: using a free ChatGPT account caused interruptions, the tool sometimes changed its output style, and it was difficult to export charts or tables; together, these problems reduced reproducibility. In conclusion, ChatGPT can be useful for the first steps of QCA and for saving time in early coding, but it is not enough for final or very detailed analysis. For better use, the following suggestions are provided: combining AI with human checking, building domain-specific dictionaries, using clear and repeated prompts, and working with more stable or professional access.
This study shows both the opportunities and the real problems when ChatGPT is used for content analysis in construction and sustainability research.</p> Mona Foroozanfar, Frederic Bonneaud, Dominique Laffly Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4336 Thu, 04 Dec 2025 00:00:00 +0000 Artificial Intelligence in Startup Investing: Opportunities, Challenges, and Human-AI Collaboration https://papers.academic-conferences.org/index.php/icair/article/view/4372 <p>This paper presents a study on the integration of Artificial Intelligence (AI) in Venture Capital (VC) decision-making. Drawing on recent academic and applied research, the study aims to investigate the key domains where AI is deployed in VC processes and the evolving relationship between human investors and AI tools. The review highlights that AI is increasingly used in deal sourcing, startup screening, due diligence, valuation modelling, and exit prediction. While AI demonstrates advantages in speed, scalability, and objectivity—particularly in pattern recognition and bias reduction—it also presents notable limitations. These include dependency on historical data, difficulty in assessing qualitative founder traits, and risks of perpetuating algorithmic biases. Consequently, a hybrid approach is advocated, where AI augments but does not replace human expertise. Moreover, the study examines how AI is changing investor behaviour and the nature of investor–founder relationships. AI is generally used to augment rather than replace human judgment, supporting decision-making rather than automating it. This shift raises new considerations around trust, transparency, and fairness in human–AI collaboration. The research concludes that while AI holds transformative potential for venture capital—enhancing efficiency, objectivity, and scalability—a hybrid approach that combines algorithmic insights with human expertise remains essential. 
Ethical adoption, attention to qualitative factors, and the design of explainable and inclusive AI systems will be critical to maximizing its benefits in startup investment.</p> Giacomo Perazzo, Renata Paola Dameri Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4372 Thu, 04 Dec 2025 00:00:00 +0000 The Impact of Human-AI Interaction Patterns on Problem Solving, AI Literacy, and Metacognition https://papers.academic-conferences.org/index.php/icair/article/view/4276 <p>Human-AI interaction, particularly in educational contexts, is a dynamic and cognitively demanding process that holds promise for enhancing goal-directed learning. Yet, there remains a scarcity of empirical studies that examine how learners’ interaction with generative AI (GenAI) varies in structure and how these patterns influence distinct learning outcomes. This study investigates the relationship between human-AI interaction processes and outcomes such as AI literacy, problem-solving skills, metacognitive strategies, and task performance. We conducted an experimental study with 45 secondary school physics student teachers engaged in a GenAI-supported lesson plan assessment task. Using questionnaire responses, trace data, and prompt logs, we coded human-AI interaction behaviours based on self-regulation and cognitive processing levels. Through sequence clustering analysis, we identified two distinct interaction patterns. Both clusters showed significant improvement in task performance, but with divergent benefits. Cluster 1 exhibited diverse regulation processes characterized by exploratory, divergent prompting and low-level cognitive engagement in the early stages. This group showed significant gains in problem-solving skills through active idea generation and broad reflection. Cluster 2 demonstrated structured regulation behaviours, initiating interaction with deep-level cognitive processing and convergent prompting. 
These learners made more deliberate modifications and completed full self-regulated learning (SRL) cycles—planning, monitoring, and reflecting—which led to enhanced AI literacy and metacognitive strategy use. Our findings suggest that effective human-AI collaboration goes beyond prompt diversity; structured regulation behaviours serve as a key mediator between prompting and learning gains. GenAI served as both cognitive and metacognitive scaffolding, facilitating critical assessment and productive delegation. These results contribute to SRL theory in AI contexts and emphasize the importance of process-level analysis. Limitations include a small sample and limited prompt feature analysis. Future research should explore emotion-aware AI systems, multimodal interaction data, and the impact of task complexity on interaction dynamics. This study provides practical insights for educators and designers of AI-integrated learning systems. Specifically, it highlights the importance of tailoring AI scaffolds to different learner regulation styles: for exploratory learners, scaffolds can encourage broad idea generation and reflection, while for structured learners, scaffolds should support iterative planning and monitoring. These findings underline both opportunities and limitations of current GenAI use in classrooms, suggesting concrete directions for teacher practice and instructional design.</p> Wenting Sun, Jiangyue Liu Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4276 Thu, 04 Dec 2025 00:00:00 +0000 Exploring ASR and Large Audio Language Models for Transcribing Peer Discourse in Noisy Classrooms https://papers.academic-conferences.org/index.php/icair/article/view/4277 <p>Capturing peer discourse in real-world classrooms offers valuable insights into collaborative learning but presents significant technical and pedagogical challenges. 
While most existing Automatic Speech Recognition (ASR) research has focused on teacher-led or online English-speaking environments, peer-to-peer dialogue in noisy, non-English-dominant, face-to-face classrooms remains underexplored. This study investigates the feasibility of using both traditional ASR systems—Whisper and Wav2Vec2—and emerging Large Audio Language Models (LALMs), including Qwen2-Audio and Ultravox, to transcribe Mandarin peer conversations recorded via students’ mobile phones in authentic classroom settings. We collected over 105,715 seconds of audio from 38 student groups across two collaborative learning tasks in university classrooms. The manual transcriptions served as ground truth. Audio quality tests were conducted on all recordings. Five representative samples with varied signal-to-noise ratios (SNR) and speech ratios were selected for in-depth analysis. Transcription quality was evaluated using Word Error Rate (WER), Character Error Rate (CER), and Fuzzy String Matching. Additionally, we conducted a thematic analysis of transcription errors to identify linguistic, acoustic, and task-related challenges. Results show that Whisper consistently outperforms other models, achieving high transcription fidelity even in moderately noisy conditions. In contrast, LALMs—despite their strengths in semantic understanding—performed poorly in verbatim transcription, often generating hallucinated or irrelevant content. Importantly, task type and speech characteristics significantly influenced model performance: structured, reflective discussions yielded better results than spontaneous, technical dialogues involving numeric and English domain terms. This study contributes a low-cost, replicable workflow for classroom audio collection and evaluation, along with a detailed taxonomy of transcription errors. We emphasise that our results are exploratory due to the limited sample size.
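The WER metric used in this evaluation is conventionally computed from a word-level Levenshtein alignment (CER is the same computation over characters instead of words); a minimal sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    via word-level Levenshtein distance computed by dynamic programming."""
    ref, hyp = reference.split(), hypothesis.split()
    # dist[i][j] = edit distance between ref[:i] and hyp[:j]
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution/match
    return dist[len(ref)][len(hyp)] / len(ref)

# One dropped word out of six reference words -> WER = 1/6
wer = word_error_rate("the cat sat on the mat", "the cat sat on mat")
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is one way hallucinated LALM output shows up in scores like these.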
Nevertheless, the findings highlight the current limitations of LALMs for ASR tasks and offer practical recommendations for model selection in educational contexts. Our findings support the responsible integration of ASR technologies into classroom practice, with implications for real-time feedback, collaborative learning analytics, and teacher professional development. For researchers, this work demonstrates the need to consider peer dialogue and multilingual classroom ecologies when evaluating ASR. For teachers, practical recommendations are offered for selecting transcription tools that can support real-time feedback and professional reflection. For lifelong learning, our study illustrates the potential of ASR technologies to make collaborative dialogue more visible, analysable, and actionable across diverse contexts.</p> Wenting Sun, Jiangyue Liu Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4277 Thu, 04 Dec 2025 00:00:00 +0000 Auto-Coding Collaborative Dialogue from Classrooms with Open LLMs Using Zero-Shot Prompting https://papers.academic-conferences.org/index.php/icair/article/view/4278 <p>Collaborative Problem Solving (CPS) is a vital 21st-century skill that demands nuanced coordination of cognitive, social, and regulatory processes among learners. In face-to-face classrooms, peer dialogue offers rich data for studying CPS, but manual annotation of such unstructured, oral interaction is labour-intensive and difficult to scale. This study investigates whether open-source Large Language Models (LLMs), including Llama and Qwen, can perform inductive qualitative coding on classroom peer dialogues using zero-shot prompting alone—without fine-tuning or training data. We collected over 210,000 words of dialogue transcripts from 38 student dyads across two CPS tasks in university classrooms: an engineering design activity and a GenAI-supported peer assessment of lesson plans.
Through a multi-phase process, we iteratively developed three zero-shot prompting strategies (self-prompting, chain-of-thought prompting, and in-context prompting) via GPT-4o interactions and deployed them across different LLMs via API access. Our findings suggest that in-context prompting consistently yields context-sensitive and theoretically coherent CPS constructs. Chain-of-thought prompting facilitates abstract reasoning but may lead to overgeneralization, while self-prompting demonstrates autonomous logic yet lacks consistency. Expert evaluations using a five-dimensional rubric (clarity, concreteness, objectivity, granularity, specificity) show moderate to high alignment between human and LLM-generated codes, although LLMs tend to overrate clarity and coherence. We further analyse discrepancies between LLMs and academic frameworks such as PISA CPS and ATC21S, and highlight challenges such as terminological drift, low recall, and theoretical misalignment. This work contributes a scalable, human-centered workflow for inductive coding of classroom dialogue and provides ready-to-use prompt templates for educational researchers. The aim of this study is to critically examine whether open-source LLMs can inductively code classroom peer dialogue in collaborative problem-solving tasks, while acknowledging both their potential and limitations in educational practice. 
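The in-context prompting strategy, in which the model receives a coding frame plus a raw dialogue excerpt and returns one code per utterance, can be sketched as a simple template builder. The template wording, label names, and function below are illustrative assumptions, not the authors' actual prompts:

```python
def build_incontext_prompt(framework_codes, transcript_excerpt):
    """Compose a zero-shot, in-context coding prompt: the model is shown a
    coding frame (construct name plus description) and a transcript excerpt,
    and asked to assign one code per utterance."""
    frame = "\n".join(f"- {name}: {desc}" for name, desc in framework_codes.items())
    return (
        "You are coding classroom peer dialogue for collaborative "
        "problem-solving (CPS) behaviours.\n\n"
        "Coding frame:\n" + frame + "\n\n"
        "Transcript:\n" + transcript_excerpt + "\n\n"
        "For each utterance, output: <utterance id>: <code> -- <one-line rationale>."
    )

# Hypothetical coding frame for illustration only.
codes = {
    "task_regulation": "planning, monitoring, or adjusting the shared task",
    "communication": "exchanging or clarifying information with the partner",
}
prompt = build_incontext_prompt(codes, "U1: Let's split the dataset first.")
```

The resulting string can then be sent to any chat-completion endpoint, which is how a single prompt design can be deployed across different LLMs via API access.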
We conclude with a dual-pathway strategy for combining practice-oriented, behaviourally grounded constructs with theory-aligned coding schemes, and offer design recommendations for future human-AI collaborative tools in learning analytics and classroom assessment.</p> Wenting Sun, Jiangyue Liu Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4278 Thu, 04 Dec 2025 00:00:00 +0000 Automatic Identification of Collaborative Problem-Solving Phases from Oral Peer Dialogue in Classroom https://papers.academic-conferences.org/index.php/icair/article/view/4292 <p>Collaborative problem solving (CPS) is a critical competency in the artificial intelligence (AI) era, requiring the integration of cognitive and social skills through real-time dialogue and coordination. While prior studies have explored CPS behaviours using human-coded text from online platforms, limited research has examined how machine learning (ML) and deep learning (DL) models perform on spoken peer dialogue in face-to-face (F2F) classroom settings. This study investigates the automatic classification of CPS phases using a validated coding framework applied to two classroom tasks—one supported by a GenAI assistant and one not. A total of 7,744 utterances were manually labelled across nine CPS subskills and three broader facets. Six ML and five DL models were evaluated, including lightweight BERT variants combined with various classifiers. Results show that BERT-based models significantly outperform traditional ML approaches. Specifically, BERT+ANN achieved better overall performance on smaller, imbalanced datasets, while BERT+CNN performed better on larger datasets. Reducing label granularity from nine subskills to three facets consistently improved classification accuracy and F1 scores. Both models achieved AUROC scores around 0.90, indicating strong discriminative capability.
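The effect of reducing label granularity, collapsing fine-grained subskills into broader facets before scoring, can be illustrated with a small sketch. The subskill and facet names here are hypothetical placeholders, not the study's actual coding framework:

```python
# Hypothetical subskill-to-facet mapping for illustration only; the real
# nine subskills and three facets come from the study's coding framework.
SUBSKILL_TO_FACET = {
    "share_information": "cognitive",
    "negotiate_meaning": "cognitive",
    "plan_task": "regulative",
    "monitor_progress": "regulative",
    "maintain_participation": "social",
}

def collapse(labels):
    """Map fine-grained subskill labels onto their broader facets."""
    return [SUBSKILL_TO_FACET[label] for label in labels]

def accuracy(y_true, y_pred):
    """Fraction of exactly matching labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["share_information", "plan_task", "monitor_progress", "maintain_participation"]
y_pred = ["negotiate_meaning", "plan_task", "plan_task", "maintain_participation"]

# Subskill-level scoring penalises confusions between labels in the same facet...
acc_fine = accuracy(y_true, y_pred)                        # 2/4
# ...which disappear once predictions are collapsed to facets.
acc_coarse = accuracy(collapse(y_true), collapse(y_pred))  # 4/4
```

This is one mechanism behind the reported accuracy gains: confusions between closely related subskills no longer count as errors at the facet level.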
Several key insights emerged from the findings: (1) model architecture matters, as simpler classifiers like ANN preserve BERT’s semantic representations and offer stable performance, especially on smaller or imbalanced datasets; (2) task context influences CPS behaviour, as different tasks elicit distinct CPS skill distributions, with task regulation dominating in technical tasks and communicative participation more prevalent in reflective tasks; (3) label granularity affects performance, as reducing the number of classification labels (e.g., from nine subskills to three facets) significantly improves model accuracy and generalizability; and (4) lightweight models are viable, as even a reduced-capacity BERT model achieved competitive performance, suggesting potential for real-time, resource-efficient deployment in educational settings. This study contributes to educational AI by introducing a novel oral CPS dataset, benchmarking multiple models, and demonstrating the feasibility of lightweight architectures for real-time deployment. Limitations include the small sample size and single-modality input. Future work should explore multimodal features, larger and more diverse classrooms, and teacher-facing dashboards for actionable feedback. The findings support the development of scalable, ethical, and human-centered learning analytics tools that enhance collaborative learning in AI-enhanced education.</p> Wenting Sun, Jiangyue Liu Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4292 Thu, 04 Dec 2025 00:00:00 +0000 A Yin-Yang Framework for Cross-Cultural Knowledge Management: Integrating AI and Human Intelligence through Peter Drucker’s Principles https://papers.academic-conferences.org/index.php/icair/article/view/4267 <p>The demands of a globalized economy challenge organizations to manage knowledge effectively across diverse cultural landscapes.
Traditional knowledge management (KM) systems prioritize efficiency but often lack the cultural adaptability and ethical flexibility required in multicultural contexts. Drawing from Peter Drucker’s management philosophy, this paper introduces a Yin-Yang framework for cross-cultural KM, merging the structured capabilities of artificial intelligence (AI) with the adaptive, ethically guided insights of human intelligence. In this model, AI functions as the “Yin” component, delivering scalable, consistent processing, while human intelligence embodies the “Yang” element, contributing cultural sensitivity and ethical discernment. Synthesizing findings from 35 recent studies, this framework addresses critical limitations in current KM models by embedding cultural intelligence (CQ) into KM practices, enabling organizations to apply AI-driven insights that respect local norms and values. This approach supports sustainable knowledge sharing, ethical decision-making, and an adaptable feedback cycle informed by human input. Practical implications for multinational organizations include improved cross-cultural collaboration and an ethically aligned, responsive KM system. Future research directions are proposed to empirically evaluate the framework’s adaptability and effectiveness across various sectors.</p> Zhaoxia Yi, Yubo Fu, Xiaojiao Duan Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4267 Thu, 04 Dec 2025 00:00:00 +0000 Where Should Control Reside in Multi-Agent Language-Model Systems? https://papers.academic-conferences.org/index.php/icair/article/view/4157 <p class="p1">As language model agents change from simple assistants to independent systems that can collaborate and use tools, an important design question arises: <em>where should control and oversight (</em> <em>i.e. governance) be placed </em>in these systems? 
Governance refers to the methods that guide how agents behave, manage information flow, and enforce operational policies. Its placement - whether centralized or distributed - directly affects the system’s <em>safety, transparency</em>, and <em>runtime performance</em>. It also impacts the ability to create formal safety arguments, which are increasingly important for using complex AI in real-world situations. While many efforts focus on aligning agents or using safety tools, there is still limited research on how different governance placements - centralized, distributed, or hybrid - affect system safety and performance throughout their lifecycle. This paper addresses that challenge by defining and comparing three governance structures for multi-agent language model systems. We examine centralized control through a single coordinator, distributed governance within individual agents, and a hybrid model that combines global oversight with local independence. These models are tested using a multi-agent platform created for open-ended question answering, which requires agents to retrieve, reason, and work together with varied and unpredictable data. We examined system behavior across several dimensions: task completion, answer helpfulness, answer relevancy, transparency, retrieval confidence, and average runtime. The results show clear trade-offs. Distributed governance improves transparency and makes it easier to follow agent reasoning, but it also leads to longer runtimes due to additional checks and retries. Centralized and hybrid designs provide similar output quality but operate much more efficiently. To our knowledge, this is the first direct comparison of governance placement in multi-agent LLM systems.
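The placement distinction can be illustrated with a minimal sketch: the same policy check can live in one central coordinator or inside each agent. The class names, the toy policy rule, and the blocking behaviour below are illustrative assumptions, not the paper's platform:

```python
def policy_check(text):
    """Toy governance rule: block any output containing a banned marker."""
    return "FORBIDDEN" not in text

class Agent:
    """A question-answering agent that may optionally govern its own output."""
    def __init__(self, name, self_govern=False):
        self.name, self.self_govern = name, self_govern

    def answer(self, question):
        draft = f"{self.name} answers: {question}"
        if self.self_govern and not policy_check(draft):
            return None  # distributed placement: the agent polices itself
        return draft

class Coordinator:
    """Centralized placement: one component vets every agent's output."""
    def __init__(self, agents):
        self.agents = agents

    def run(self, question):
        results = []
        for agent in self.agents:
            draft = agent.answer(question)
            if draft is not None and policy_check(draft):
                results.append(draft)
        return results

# Hybrid placement corresponds to combining both: self-governing agents
# whose outputs still pass through the coordinator's check.
agents = [Agent("retriever"), Agent("reasoner", self_govern=True)]
answers = Coordinator(agents).run("Which sources support claim X?")
```

The trade-off reported above falls out of this structure: per-agent checks add work (and retries) on every agent's path, while a single coordinator check runs once per output.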
The evidence shows that governance is not a minor detail; it is a key design choice that impacts system safety, speed, and reliability in real-world tasks.</p> Vincent Caldeira, Anindita Sinha Banerjee Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4157 Thu, 04 Dec 2025 00:00:00 +0000 Driving Momentum to Higher Order Learning with AI Through the PONDU Model https://papers.academic-conferences.org/index.php/icair/article/view/4327 <p>Education has seen significant transformation in its role, funding, and approach to learning. This has required a reassessment of how pupils and students learn best. This article highlights the exponential growth of AI applications in education, which support evolving knowledge and skill requirements in the labour market and thus help reskill the UK workforce for a more technologically adept future. A new educational model, the PONDU Model, is designed for this purpose. Pre-class activity uses AI applications to test knowledge and understanding, with personal and collaborative learning ensuring student understanding and engagement. In-class use of flipped learning pedagogy fosters student motivation, participation, and a path to higher-order learning. Post-class activity allows for evaluation and achievement of higher-order learning using AI-driven assessments. The PONDU Model is formulated as a structured approach within which student learning is developed. At the asynchronous stage, an avatar or virtual assistant and peer review are used to test knowledge, understanding, and reflection. In addition, learning analytics can identify students’ learning characteristics, which are then supported by adapted AI applications to enhance personal learning.
This continues through the synchronous stage, based on flipped learning with gamification options, to the post-synchronous stage, where higher-order learning is achieved with the support of AI applications. The research leading to the PONDU Model design is based on a qualitative research strategy, using secondary data collected and analysed from academic sources. Student feedback acquired through a module feedback mechanism, indicating student satisfaction and higher-order learning under flipped learning, is also used. The conclusions indicate that there is scope for multiple digital and AI applications at the asynchronous stage of the PONDU Model, and further scope for AI and gamification in the synchronous teaching and learning stage and in the summative asynchronous stage involving summative assessment, with the result of higher-order student learning. The PONDU Model approach recognises the value added by digital and AI applications.</p> Maarten Pontier, Xiangping Du Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4327 Thu, 04 Dec 2025 00:00:00 +0000 Prompt-Craft Cards: A Toolkit for Developing Design Judgment through Reverse Prompt Engineering https://papers.academic-conferences.org/index.php/icair/article/view/4362 <p>The integration of generative AI into design education poses a critical challenge beyond technical skill: cultivating design judgment. Students often struggle to externalize their tacit knowledge into the explicit language of prompts, a process essential for developing the analytical capacity to make well-substantiated design choices. This paper introduces Prompt-Craft Cards (PCC), a tangible toolkit and pedagogical framework designed to address this gap. Using a Research-through-Design methodology, PCC frames Reverse Prompt Engineering (RPE) as a simulator for practicing design decisions at specific expertise levels.
The toolkit features a differentiated system of three card decks, each scaffolding a different stage of expertise and knowledge transfer based on established learning theories. The Foundational Deck targets compositional judgments, the Adaptation Deck focuses on navigational choices, and the Reflection Deck encourages critical, metacognitive inquiry. The toolkit's efficacy will be evaluated through a multi-stage study employing think-aloud protocols within an Action Research design. This structured process transforms prompting into a form of deliberate practice. By engaging with the ambiguity between their intent and the AI's output, students are compelled to articulate, reflect on, and refine their design decisions. Ultimately, I argue that this structured, reflective practice moves beyond teaching prompt engineering as a technical skill, transforming it into a powerful pedagogical method for cultivating the critical visual literacy and metacognitive thinking essential for professions and practices that rely on interpreting and creating visual media in the age of AI.</p> Kardelen Aysel Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4362 Thu, 04 Dec 2025 00:00:00 +0000 Becoming a Generative AI User: Social Learning and Responsible Engagement in Software Development https://papers.academic-conferences.org/index.php/icair/article/view/4239 <p>As generative AI becomes integrated into software development, this paper explores how developers adopt and make sense of it—not as a rational choice but as a socially learned and interpretive process. We examine how developers become AI users through social exposure, peer learning, and shifting perceptions of usefulness and risk. The study combines netnography of Reddit communities with interviews with software developers.
Using Becker’s three-step model—learning to use, recognise effects, and enjoy—we trace how developers move from experimentation to integrated AI use. Contrasting with models like the Technology Acceptance Model (TAM), we argue that Generative AI adoption is not a binary of acceptance or resistance, but a culturally embedded process shaped by evolving norms and community practices. This perspective “de-exceptionalizes” AI and offers a more grounded, human-centred understanding of how professional practices evolve with emerging technologies.</p> Morten Boesen, Sarah O'Neill Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4239 Thu, 04 Dec 2025 00:00:00 +0000 Generative Artificial Intelligence for Recognition of Surgical Site Complications: the PRISCA Project https://papers.academic-conferences.org/index.php/icair/article/view/4346 <p>The PRISCA project develops digital tools for postoperative monitoring and early detection of wound complications (e.g., infections) through telemedicine solutions. The approach addresses concrete clinical needs, including the follow-up of patients discharged from centers of surgical excellence located far from home, the optimization of hospital access, and the early identification of at-risk situations. The output of the project will be a telemonitoring platform (mobile and web apps), artificial intelligence modules for wound image analysis, and informative content for patients. The technological architecture is designed to support timely intervention in case of wound-related complications, while also reducing unnecessary in-person visits. The project’s main innovation is an Artificial Intelligence module that aims to enable healthcare professionals to perform automated analysis of wound images for early detection of post-surgical complications.
The limited availability of public data was tackled by applying a data augmentation method and by integrating a generative AI model that creates new synthetic images from real images and textual prompts. All generated images were validated by clinicians and then included in the final dataset. This approach ensures the model learns from a diverse set of images, increasing its robustness and accuracy. The adopted detection model is YOLOv11, which localizes the wound and performs a pathological/non-pathological classification. Results show good localization and promising classification accuracy. We compared the model trained on the original dataset with the version enhanced with synthetic data in order to assess relative improvements; these comparisons will help refine the model for better performance in real-world scenarios. The first results show an increase in model performance with augmented data, but more systematic comparisons are needed. Additional real images from a proprietary dataset currently being collected will also be integrated, further enhancing the AI's ability to identify early complications.</p> Isabel Carozzo, Elisa Bruzzo, Matteo Parodi, Luca Giulio Brayda, Michele Minuto, Ennio Ottaviani Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4346 Thu, 04 Dec 2025 00:00:00 +0000 Vibe Coding a Research Probe for Exploring AI/Voice Based Code Reviews https://papers.academic-conferences.org/index.php/icair/article/view/3975 <p>Generative AI tools increasingly shape established software engineering practices such as code review, but the socio-technical implications of using AI for these practices remain understudied.
In this paper we first introduce vibe coding (Andrej Karpathy [@karpathy], 2025) as a method for allowing researchers with limited coding experience to rapidly create custom-made probes for conducting research. Guided by Alami and Ernst’s (2025) findings on AI-generated feedback for code review, we introduce a vibe-coded AI/voice-based code review prototype as a provotype (Boer and Donovan, 2012). We then outline an explorative study to critically assess the socio-technical effects of using AI-based voice interfaces in code reviews. We propose a qualitative approach, based on the Disruptive Research Playbook (Storey <em>et al.</em>, 2024), involving Danish software developers to investigate voice-based feedback's impact on topics including trust, collaboration, and perceived skill shifts. Initial methodological reflections emphasize the need for cautious exploration using the provotype as an intervention for gathering data in the form of reactions, expectations, and concerns about the effects of AI interactions in the established professional practice of code review. Next steps are to finalize the provotype, complete the research design, and collect and analyze qualitative data from interventions with Danish software developer teams.</p> Martin Gundtoft Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/3975 Thu, 04 Dec 2025 00:00:00 +0000 Operationalizing AI for Cyber Threat Intelligence: Governance Insights from the DYNAMO Framework https://papers.academic-conferences.org/index.php/icair/article/view/4338 <p>As artificial intelligence (AI) becomes increasingly embedded in cybersecurity operations, the need for structured, compliant, and scalable integration frameworks is more urgent than ever. This paper explores how AI can be operationalized within cyber threat intelligence (CTI) systems through a qualitative case study of the DYNAMO framework in the energy sector.
Originally developed to enhance resilience in critical infrastructure sectors, DYNAMO combines business continuity management (BCM) and CTI to support situational awareness and proactive risk mitigation. Although the framework has been applied in the energy sector in this study, its principles apply to other domains that face complex cyber threats. The study investigates how AI—particularly machine learning—can improve CTI sharing by enabling real-time threat detection, pattern recognition, and adaptive response. Drawing on recent academic and industry literature, we analyze the benefits and limitations of AI-enhanced CTI, including improved detection accuracy and faster response times. However, challenges such as adversarial attacks, model poisoning, and the need for high-quality training data are also addressed. We further examine the governance implications of integrating AI into CTI platforms, especially in light of the EU Cyber Resilience Act (CRA). The paper highlights the importance of aligning AI deployment with regulatory requirements, such as 24-hour incident reporting, post-market monitoring, and data sovereignty. The ECHO Early Warning System (E-EWS), a collaborative platform developed under the EU Horizon 2020 program, is presented as a practical example of cross-sectoral CTI sharing that incorporates AI capabilities. Our findings suggest that AI can significantly enhance cyber resilience when embedded within a governance-aware framework like DYNAMO. We recommend a phased implementation strategy that includes stakeholder training, regulatory alignment, and continuous monitoring. 
The paper concludes by emphasizing the need for interdisciplinary collaboration between AI developers, cybersecurity professionals, and policymakers to ensure responsible and effective AI integration in CTI systems.</p> Jyri Rajamäki, Nasim Ali, Oskari Kulmala, Dilasha Singh Thakuri, Tatu Sorola Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4338 Thu, 04 Dec 2025 00:00:00 +0000 Prompting Future Journalists to Prompt: An Experiential Study on GenAI, Critical Literacy, and Reflective Practice in Data News https://papers.academic-conferences.org/index.php/icair/article/view/4358 <p>Generative AI (GenAI) is rapidly integrating into newsrooms, creating a paradox: while the industry embraces AI for efficiency, public skepticism persists, and scholars warn of AI's potential to exacerbate information disorder. This underscores an urgent need for a sophisticated approach to AI literacy in journalism education. This paper reports findings from the first phase of a two-phase case study investigating how undergraduate communication students (N=19) with prior journalism training interact with a custom GenAI tool for data-driven storytelling. Through a three-part methodology—pre-study questionnaire, logged experiential task, and post-study survey—our analysis reveals that prior AI experience does not uniformly predict success or critique. Instead, a data-driven thematic analysis identifies four emergent archetypes of engagement: the Director, who treats the AI as a controllable instrument; the Collaborator, who frames it as a creative partner; the Delegator, who views it as an often-unreliable shortcut; and the Antagonist, who experiences it as a deficient obstacle. These archetypes, which align with existing frameworks of user-AI interaction, are actively shaped by students' pre-existing journalistic philosophies. 
This paper argues for a phenomenologically informed critical AI literacy that equips students with the metacognitive awareness to reflect on the technological relationships they are building.</p> Mert Seven, Özlem Ozan, Emrah Emirtekin Copyright (c) 2025 International Conference on AI Research https://papers.academic-conferences.org/index.php/icair/article/view/4358 Thu, 04 Dec 2025 00:00:00 +0000