This is an overview page with metadata for this scientific work. An external link to the full text is currently not available.
What’s in the Box?: Uncertain Accountability of Machine Learning Applications in Healthcare
Citations: 0
Authors: 2
Year: 2020
Abstract
Machine learning is an increasingly significant part of modern healthcare, transforming the way clinical decisions are made and health resources are managed (Wiens and Shenoy 2018). These developments have been closely scrutinized by bioethicists and legal scholars, who have identified machine learning’s potentially harmful impacts on patients and clinicians. Danton S. Char and colleagues have proposed a well-defended pipeline model for identifying and addressing ethics concerns, with the goal of mitigating the harmful impacts of machine learning systems and helping to better integrate them into healthcare systems. Their paper is an important and productive step toward ensuring that artificially intelligent tools can be used to safely promote human health. As the authors state explicitly, the proposed pipeline model does not address the issue of “who should be responsible for what,” but is rather intended to provide a structured framework in which to consider the ethical implications raised by machine learning applications in health.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations