This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Liability Risks of Ambient Clinical Workflows With Artificial Intelligence for Clinicians, Hospitals, and Manufacturers
Citations: 4
Authors: 3
Year: 2025
Abstract
In August 2024, the nation's largest nonprofit integrated health care provider, Kaiser Permanente, announced that clinicians would have access to an ambient clinical documentation scribe: an assisted clinical documentation tool that uses artificial intelligence (AI) to securely summarize relevant medical information from spoken, natural conversations (also called ambient clinical documentation or AI scribes). After automatically summarizing the encounter, the AI scribe sends the summary to the clinician for review. Ambient clinical documentation scribes are now offered by some of the fastest-growing AI companies in health care, with significant venture capital funding and an impressive roster of health system customers.

Technologies such as ambient clinical documentation and other generative AI tools may improve care and lessen clinician burnout by reducing documentation burdens. But they also raise the question of who is responsible when AI-generated patient information is inaccurate, especially when those errors cause injury to a patient. This question is particularly acute in cancer care, where there is a unique set of terminology for each of the more than 400 types of cancer, leading to an increased chance of documentation error, and where decisions made on the assumption that information is accurate can be life-altering.

AI transcription tools in their current versions are not considered regulated medical devices under the US Federal Food, Drug, and Cosmetic Act. Unless this changes, the responsibility falls to stakeholders other than the US Food and Drug Administration (FDA) to ensure the technology's safety and efficacy. In this article, we analyze the AI governance responsibilities and potential tort liability for clinicians, hospitals, and manufacturers using AI for clinical note-taking and suggest several potential ways to address them.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations