This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Regulating the future of laboratory medicine: European regulatory landscape of AI-driven medical device software in laboratory medicine
Citations: 6
Authors: 10
Year: 2025
Abstract
Artificial intelligence (AI) is rapidly transforming laboratory medicine, impacting medical devices and healthcare practices. Despite these advancements, AI-based medical device software (MDSW) introduces a new layer of complexity in regulatory compliance. This paper outlines the regulatory landscape for MDSW and AI-driven MDSW, clarifying the responsibilities of laboratory professionals and manufacturers under the In Vitro Diagnostic Regulation (IVDR), ISO 15189:2022, and the Artificial Intelligence Act. An analysis of 89 MDSWs approved under the IVDR, derived from the European Database on Medical Devices (EUDAMED), reveals a diverse landscape of applications, ranging from digital pathology and molecular diagnostics to laboratory automation and clinical decision support. While Germany currently dominates the EU market for these devices, and the majority of approved MDSW remain non-AI-driven and classified as low-risk, the increasing presence of AI-powered Class C devices underscores the growing potential of software in complex diagnostic scenarios. However, realizing the full potential of AI in laboratory medicine requires careful navigation of the evolving regulatory landscape. Key challenges persist, including defining intended use, ensuring robust clinical evidence, mitigating data bias, and establishing rigorous post-market surveillance. Balancing regulatory oversight with innovation is critical to fostering the development of trustworthy AI systems without stifling progress. As regulatory frameworks continue to evolve, establishing clear validation methodologies and transparent compliance pathways will be essential to unlocking the full potential of AI in laboratory medicine while ensuring the highest standards of safety and clinical effectiveness.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
Institutions
- Ministry of Health (TR)
- Directorate of Health (IS)
- Inserm (FR)
- Université de Montpellier (FR)
- Centre de Référence des Maladies Autoinflammatoires et des Amyloses (FR)
- Hospital Universitario Puerta de Hierro Majadahonda (ES)
- University Clinic of Pulmonary and Allergic Diseases Golnik (SI)
- Ss. Cyril and Methodius University in Skopje (MK)
- Azienda Socio Sanitaria Territoriale degli Spedali Civili di Brescia (IT)
- University of Belgrade (RS)
- University of Padua (IT)
- Radboud University Nijmegen (NL)
- Radboud University Medical Center (NL)
- Streekziekenhuis Koningin Beatrix (NL)