This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
The Right to Explanation in AI: In a Lonely Place
Citations: 1
Authors: 3
Year: 2025
Abstract
Technology is increasingly being used in decision-making in all fields, particularly in health care. Automated decision-making promises to change medical practice and potentially improve and streamline the provision of health care. Although the integration of artificial intelligence (AI) into medicine is encouraging, it is also accompanied by fears concerning transparency and accountability. This is where the right to explanation comes in. Legislators and policymakers have relied on the right to explanation, a new right guaranteed to those affected by automated decision-making, to ease fears surrounding AI. This is particularly apparent in the province of Quebec, Canada, where legislators recently passed Law 5, An Act respecting health and social services information and amending various legislative provisions. This paper explores the practical implications of Law 5, and by extension of the right to explanation internationally, in the health care field. We highlight that the right to explanation is anticipated to alter physicians' obligations to patients, namely the duty to inform. We also discuss how the legislative drafting of the right to explanation is vague and hard to enforce, which dilutes its potential to provide meaningful protections for those affected by automated decisions. After all, AI is a complex and innovative technology and, as such, requires complex and innovative policies. The right to explanation is not necessarily the answer.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,452 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,258 citations
"Why Should I Trust You?"
2016 · 14,307 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,136 citations