This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Regulatory challenges and liability pathways for AI-powered robots in EU medical practice
0
Citations
5
Authors
2026
Year
Abstract
This study explores the legal and ethical implications of introducing AI-powered robots into medical practice within the European Union (EU). It employs an interdisciplinary methodology that combines survey responses from medical professionals across 20 countries, expert interviews, a literature review, and legal analysis. The findings identify a significant gap in professional awareness of existing legal frameworks. The analysis focuses in particular on several key EU legal instruments, including the Artificial Intelligence Act (AI Act), the General Data Protection Regulation (GDPR), the Medical Device Regulation (MDR), and the revised Product Liability Directive (PLD). It further identifies eight key factors that influence liability, including the explainability of AI decisions, the quality of training data, cybersecurity vulnerabilities, and evolving dynamics in the doctor–patient relationship. Building on recent academic debates, the study proposes a dual framework for assigning liability that incorporates both ex-ante (preventive) and ex-post (remedial) mechanisms. Alongside this dual framework, the study proposes liability models that translate these principles into practical mechanisms for regulating AI-powered robots deployed in medical practice. These models are grounded in a link-in-the-chain approach that distributes liability among all actors involved in the development, deployment, and use of AI-powered robots. The study concludes that a harmonized, context-sensitive legal framework is urgently needed. Such a framework should clarify roles and responsibilities across the AI ecosystem, promote collaborative governance, and provide targeted training for medical professionals. Ultimately, ensuring the safe and effective integration of AI-powered robots in medical practice will depend on coordinated regulatory action and a shared commitment to transparency, accountability, and patient safety.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations