Strategic Injustice in AI-Assisted Goal Setting: The Marginalization of Social Objectives

Authors

  • Mikolaj Pindelski, SGH Warsaw School of Economics

DOI:

https://doi.org/10.34190/ecmlg.21.1.4137

Keywords:

SDG, LLM, Goals discrimination, Artificial Intelligence, OpenAI, Strategic discrimination

Abstract

The study investigates how artificial intelligence-driven decision making influences the prioritization of traditional financial objectives over sustainability-related goals. Using a modified “cake sharing” model in the NetLogo simulation environment, four objectives were examined: improving ROI, increasing revenue, reducing costs, and ensuring the sustainability of sales activities. The simulation phase comprised 500 iterations, pairing the sustainability-oriented (SDG) goal with each financial objective, and revealed potential systematic discrimination against non-financial targets. Input data were derived from CD Projekt’s 2024 financial and non-financial reports, analyzed with ChatGPT to recommend strategic priorities before integration into the NetLogo framework. Findings reveal that AI-generated recommendations favor financial objectives, particularly ROI, leading to the marginalization of sustainability goals, and confirm the hypothesis that AI-supported goal setting can reinforce bias. While the model is simplified and limited to four objectives under controlled conditions, the results underscore the risk of relying on generative AI in strategic planning. The study highlights the need for managers to critically assess algorithmic assumptions in AI-supported decision making.
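The simulation design described above can be sketched in miniature. The following Python snippet is purely illustrative and is not the authors' NetLogo model: the goal labels, the `bias` parameter, and the allocation rule are all assumptions introduced here to show the shape of a pairwise "cake sharing" experiment, where a biased recommender splits a fixed budget between one financial goal and the SDG goal over 500 iterations.

```python
import random

# Hypothetical labels standing in for the study's three financial objectives.
FINANCIAL_GOALS = ["ROI", "revenue", "cost_reduction"]


def run_pairing(financial_goal, iterations=500, bias=0.7, seed=0):
    """Fraction of iterations in which `financial_goal` received the
    larger share of the "cake" when paired against the SDG goal.

    `bias` is a made-up parameter skewing allocations toward the
    financial objective, standing in for the tendency the study
    attributes to AI-generated recommendations.
    """
    rng = random.Random(seed)
    financial_wins = 0
    for _ in range(iterations):
        # Draw the financial goal's share from a distribution
        # centered on `bias`, clamped to the [0, 1] interval.
        share = min(1.0, max(0.0, rng.gauss(bias, 0.15)))
        if share > 0.5:
            financial_wins += 1
    return financial_wins / iterations


if __name__ == "__main__":
    for goal in FINANCIAL_GOALS:
        print(goal, run_pairing(goal))
```

With `bias` above 0.5, the financial objective dominates most pairings, mirroring the marginalization pattern the study reports; setting `bias = 0.5` would model an unbiased recommender as a baseline.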

Published

2025-11-04