Human-centered AI in Healthcare – Balancing Patient Autonomy and Physician Judgment
DOI:
https://doi.org/10.34190/icair.5.1.4309

Keywords:
AI healthcare, shared decision making, explainability, autonomy, practice-based judgment

Abstract
This article outlines ethical issues related to integrating artificial intelligence (AI) into shared decision making (SDM), focusing on three concerns: (1) the explainability needed to enact autonomy, (2) respect for patients' values and preferences in treatment decisions, and (3) the impact of AI on physician expertise. First, it is argued that the kind of explainability required to support patient and physician autonomy can be met through rigorous model validation combined with context-sensitive post hoc explanations. Next, turning to the patient perspective, the article argues against the assumption that having AI pre-rank treatment recommendations undermines patient autonomy and therefore ought to be avoided. Instead, it recognizes AI's potential to reduce cognitive overload and emphasizes striking an appropriate balance in AI-guided decision making. Subsequently, the physician's perspective is considered, analyzing how AI affects physician expertise, particularly in light of automation bias, deskilling, and the erosion of practice-based judgment. The article warns against a shift toward actuarial decision making driven by algorithmic risk stratification, which may compromise core ethical principles. It concludes by promoting human-centered AI integration that enhances human agency: empowering patients to make informed choices and allowing physicians to exercise sound clinical judgment.