This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Artificial Intelligence algorithms in healthcare: Is current FDA regulation sufficient? (Preprint)
Citations: 0 · Authors: 8 · Year: 2022
Abstract
Given the growing use of machine learning (ML) technologies in healthcare, regulatory bodies face unique challenges in governing their clinical use. Under the regulatory framework of the U.S. Food and Drug Administration (FDA), approved ML algorithms are effectively 'locked', preventing their adaptation to the ever-changing clinical environment and negating the defining strength of ML technology: learning from real-world feedback. At the same time, regulation must enforce a strict level of patient safety in order to mitigate risk at a systemic level. Given that ML algorithms often support, and at times replace, the role of medical professionals, we propose a novel regulatory pathway analogous to the regulation of medical professionals, encompassing the lifecycle of an algorithm from inception and development to clinical implementation and continual clinical evaluation. We then discuss in depth the technical and non-technical challenges to its implementation, and offer potential solutions to unleash the full potential of ML technology in healthcare while ensuring quality, equity, and safety.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations