This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Abstract 15898: Appropriateness of Cardiology Clinic Notes Transcribed by a Popular, Publicly Available Artificial Intelligence Model (GPT-4)
Citations: 0
Authors: 6
Year: 2023
Abstract
Introduction: Artificial intelligence large language models (LLMs) such as GPT-4 have gained millions of users and widespread public attention. GPT-4 may have the potential to support medical workflows, especially clinical note transcription. However, whether GPT-4 can appropriately transcribe simple cardiology clinical encounters into notes has not been evaluated.
Research Question: Can GPT-4 accurately and appropriately transcribe clinic encounters of simple cardiology presentations into medical notes?
Methods: Five hypothetical encounters were created as detailed dialogue between a physician and patient, covering subjective and objective data, assessment, plan, and counseling. Encounters covered presentations of stable angina, primary prevention, secondary prevention, familial hypercholesterolemia, and acute pericarditis. GPT-4 was prompted to transcribe each encounter using three prompts: 1) to transcribe a medical note; 2) to transcribe a medical note and add additional relevant recommendations; and 3) to transcribe a medical note and add additional relevant recommendations with reference to specific AHA guidelines. Each note was graded by a board-certified cardiologist at a tertiary center as appropriate (accurate transcription and recommendations) or inappropriate (inaccurate transcription or recommendations).
Results: With prompts 1 and 2, all GPT-4 notes were classified as appropriate (Table). With prompt 3, all notes were transcribed accurately but contained inaccurate references to AHA guidelines and were classified as inappropriate.
Conclusion: GPT-4 has the potential to appropriately transcribe dialogue-based clinic encounters of simple cardiology conditions into medical notes. However, beyond direct transcription, supportive guidelines or references provided by GPT-4 for encounters may be inappropriate. Efforts to develop LLMs for cardiology clinical encounter transcription should account for these limitations.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations