This is an overview page with metadata for this scientific work. The full article is available from the publisher.
A medical algorithmic audit framework for evaluating the safety, equity, and quality of an AI Scribe tool in a paediatric developmental assessment clinic
Citations: 0
Authors: 10
Year: 2025
Abstract
Background: Any tool that can reduce the administrative burden on healthcare providers while preserving safe, accountable, and high-quality medical documentation is of immense value to both healthcare institutions and consumers. The key question is whether a prospective tool can reduce these burdens while maintaining (and, ideally, elevating) documentation quality standards. The goal of this study is to establish whether a large language model (LLM)-based documentation assistive tool can maintain safe, high-quality documentation while reducing the time required to produce it in the Child Development Unit (CDU) at the Women’s and Children’s Hospital (WCH).

Methods: Using an algorithmic audit framework developed specifically for our context, we will compare clinician-written clinical notes with AI-generated notes produced in parallel to the standard of care (i.e., a ‘silent’ or translational trial paradigm). We will compare the time required to review clinical documentation under the standard of care with that required under the AI-supported workflow, with consideration of the accuracy of the final documentation. Finally, we will qualitatively describe AI-generated notes and compare them to the current standard to identify specific areas where clinical guidelines (e.g., performance information, risk mitigation) would support appropriate clinical use.

Significance: The contributions of our protocol are twofold. First, we close a key gap in the literature, which has thus far given limited attention to measuring the real-world clinical utility of AI Scribes. Second, we adapt an established framework for medical algorithmic auditing to the specific use case of LLM-enabled AI Scribes.

Our multidisciplinary research team, comprising clinicians, consumers, Aboriginal Health representatives, and an ethicist, is unique within this research field and will allow us to generate insights from these diverse perspectives that together can provide constructive guidance for clinicians using and testing AI Scribe tools.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 cit.