This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Ensuring AI explainability in healthcare: problems and possible policy solutions
Citations: 15
Authors: 2
Year: 2022
Abstract
AI promises to address the quality and cost challenges facing health services; however, errors and bias in medical device decisions pose threats to human health and life. This has also led to a lack of trust in AI medical devices among clinicians and patients. The goal of this article is to assess whether the AI explainability principle, established in numerous ethical AI frameworks, can help address these and other challenges posed by AI medical devices. We first define the AI explainability principle, delineate it from the AI transparency principle, and examine which stakeholders in the healthcare sector would need AI to be explainable and for what purpose. Second, we analyze whether explainable AI in healthcare is capable of achieving its intended goals. Finally, we examine a robust regulatory approval framework as an alternative, and more suitable, way of addressing the challenges caused by black-box AI.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations