This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
AI in healthcare: Regulatory guidelines and judge-made negligence principles for AI implementers
Citations: 1
Authors: 1
Year: 2025
Abstract
The use of artificial intelligence (AI) in healthcare may, notwithstanding its potential benefits, result in harm to patients from allegedly negligent acts or omissions by hospitals and medical doctors. In such circumstances, how should the principles in the tort of negligence (duty of care, breach, causation, remoteness of damage, and defences) respond to AI innovations in healthcare? In particular, how may the standard of care expected of hospitals and medical doctors be informed by regulatory guidelines? We refer to case law precedents and regulatory guidelines on the roles and responsibilities of doctors and hospitals as AI implementers. Importantly, they prompt further reflection and consideration as to how regulatory guidelines can impact the application of judge-made principles in negligence in connection with, for example, the reliance on medical AI in clinical practice, the disclosure of AI usage and risks to patients, and the challenges posed by the opacity and non-explainability of medical AI.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations