This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Informed Consent to AI-based Decisions in Healthcare: Must Patients Understand the AI’s Output?
Citations: 0
Authors: 1
Year: 2025
Abstract
The use of artificial intelligence (AI) in medical decision-making challenges patients’ right to informed consent and autonomy; many AI systems lack transparency and explainability, making it difficult to explain AI-based treatment recommendations. This article investigates to what extent patients’ right to fully informed consent requires an understanding of the AI output, underlying decision-making processes and associated risks. Analysing European human rights instruments and patients’ rights in Norway, it examines whether the right to informed consent adequately ensures patient understanding in AI-based healthcare. The article argues that the extent of patient understanding required of AI outputs cannot be derived solely from the right to information, but demands a purpose-based interpretation, determined by the impact on patients’ trust in AI-based recommendations and the protection of bodily integrity and autonomy. Informed consent entails flexibility to include information about the AI output and decision-making processes to ensure patients fully understand the implications of AI-based interventions. Opaque AI precludes this disclosure, undermining patients’ autonomy, and calls into question the adequacy of the informed consent doctrine. AI may restrict the flow of information between healthcare professionals and patients, amplify information asymmetry, and diminish informed consent as a tool for empowerment. Europe’s AI governance framework, particularly the EU AI Act, fails to address these challenges satisfactorily; health-specific regulation and enhanced AI literacy are needed. To safeguard informed consent in the AI era, the right to information must be strengthened and the required level of explainability defined.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations