This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Artificial intelligence in medical device software and high-risk medical devices – a review of definitions, expert recommendations and regulatory initiatives
Citations: 54
Authors: 15
Year: 2023
Abstract
The level of clinical evidence required should be determined according to each application and to legal and methodological factors that contribute to risk, including accountability, transparency, and interpretability. EU guidance for MDSW based on international recommendations does not yet describe the clinical evidence needed for medical AI software. Regulators, notified bodies, manufacturers, clinicians and patients would all benefit from common standards for the clinical evaluation of high-risk AI applications and transparency of their evidence and performance.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
Institutions
- University Hospital of Wales (GB)
- Cardiff University (GB)
- KU Leuven (BE)
- Consorci Institut D'Investigacions Biomediques August Pi I Sunyer (ES)
- Erasmus University Rotterdam (NL)
- Politecnico di Milano (IT)
- Philips (Belgium) (BE)
- University College London (GB)
- TU Dresden (DE)
- Fresenius (Germany) (DE)
- Elekta (Sweden) (SE)
- Health and Safety Authority (IE)
- University of Oxford (GB)