This is an overview page with metadata for this scholarly article. The full text is available from the publisher.
Liability in AI-Enabled Clinical Decision Support: Toward a Tiered Responsibility Model
Citations: 0 · Authors: 2 · Year: 2025
Abstract
AI-enabled clinical decision support (CDS) is increasingly embedded in diagnosis and care pathways, yet liability remains unclear when recommendations cause harm. This paper offers a structured, actionable approach. We synthesize liability regimes across the United States, European Union, and Asia-Pacific and highlight gaps specific to adaptive, software-driven systems. We ground the analysis in ethical risks (accountability shifts, automation bias, and power asymmetries) that should shape how responsibility is shared. We then introduce a quantitative Tiered Liability Model that allocates total harm among developers, physicians, and hospitals using policy weights and three measurable indices: developer culpability, physician oversight, and AI explainability. We prove basic feasibility and monotonicity properties of the allocation, explain how to operationalize the indices with audits, EHR-based oversight signals, and logging/traceability, and provide a regulatory crosswalk that maps regional doctrines to these indices. Two scenario analyses, a pneumonia misdiagnosis and a hospital operations system that contributes to understaffing, illustrate how the model yields fair, incentive-aligned allocations and how improvements in explainability and oversight shift liability. To support adoption, we include a ready-to-use procurement scorecard with measurable evidence and pass/fail gates, together with an implementation roadmap and regulatory levers (metrics standardization, safe harbors, post-market surveillance). The result is a coherent framework that protects patients, sustains innovation, and gives regulators, providers, and vendors a common language for assigning responsibility.
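The abstract states that the Tiered Liability Model splits total harm among developers, physicians, and hospitals using policy weights and measurable indices, and that the allocation is feasible (shares are nonnegative and sum to the total harm) and monotone in the indices. The paper's actual formula is not reproduced on this overview page; the sketch below is only one plausible, hypothetical reading in which each party's share is proportional to its policy weight times a party-specific index score (the function name, parameter names, and the proportional rule are all assumptions, not the authors' method).

```python
def allocate_liability(harm, weights, indices, eps=1e-6):
    """Hypothetical proportional allocation of total harm.

    weights: dict of policy weights per party (e.g. developer/physician/hospital)
    indices: dict of index scores in [0, 1] per party (e.g. a developer-culpability
             score; how the paper maps its three indices onto parties is assumed)
    eps:     keeps every party's raw score positive so the split is well defined

    Feasibility holds by construction (shares are nonnegative and sum to `harm`);
    each party's share is monotone increasing in its own index.
    """
    raw = {p: weights[p] * (indices[p] + eps) for p in weights}
    total = sum(raw.values())
    return {p: harm * raw[p] / total for p in raw}
```

Under this reading, raising the developer-culpability index shifts a larger share of the same total harm onto the developer, which matches the abstract's claim that improvements in explainability and oversight shift liability across parties.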
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations