This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Using Large Language Models to Identify Patient–Oncologist Communication Domains: A Feasibility Study
Citations: 0
Authors: 5
Year: 2026
Abstract
BACKGROUND: The American Society of Clinical Oncology (ASCO) convened a multidisciplinary panel in 2017, resulting in patient-oncologist communication guidelines. Ideally, these conversations should be documented in the medical record. However, chart review for communication topics is inefficient. Large language models (LLMs) offer a computational method for identifying communication domains in clinical notes, which could subsequently provide feedback to clinicians.
OBJECTIVES: The purpose of this study was to develop an approach using LLMs to identify communication domains in unstructured free-text notes, validated against gold-standard chart review.
SETTING/SUBJECTS: The study population included 134 clinical notes from 30 patients with advanced cancer seen in June 2024 at one of seven Dana-Farber Cancer Institute clinics (Boston, MA). We used a HIPAA-secure artificial intelligence tool based on GPT-4o to develop an LLM prompt for identification of communication domains.
MEASUREMENTS: We used standard performance metrics to compare the LLM prompt to chart review for all six communication domains. A hallucination index was calculated to assess false information that may be produced by LLMs when applied to large data sets.
RESULTS: Across communication domains, compared to chart review, the note-level LLM analysis achieved sensitivity ranging from 0.43 to 1.0, specificity ranging from 0.32 to 0.99, and accuracy ranging from 0.51 to 0.99. The average hallucination index for all domains was low. LLM abstraction required approximately 7 seconds per note, compared to 5-7 minutes with chart review.
CONCLUSION: LLMs have the potential to identify ASCO communication domains. Future directions include applying the method for quality improvement efforts, such as generating feedback for oncologists on topics that may require follow-up.
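The note-level metrics reported above can be illustrated with a minimal sketch. This is not the study's actual pipeline; it simply shows, for a single communication domain, how sensitivity, specificity, and accuracy are computed from paired binary labels (chart review as gold standard vs. LLM prediction). The example labels are hypothetical.

```python
def note_level_metrics(gold, pred):
    """Sensitivity, specificity, and accuracy from paired binary labels.

    gold: 1 if chart review found the domain documented in the note, else 0.
    pred: 1 if the LLM flagged the domain in the note, else 0.
    """
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    tn = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 0)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / len(gold)
    return sensitivity, specificity, accuracy

# Hypothetical labels for one domain across 10 notes.
gold = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]
pred = [1, 1, 0, 0, 0, 1, 0, 1, 0, 1]
sens, spec, acc = note_level_metrics(gold, pred)
print(sens, spec, acc)  # 0.8 0.8 0.8
```

The hallucination index mentioned in the abstract is a study-specific measure and is not reproduced here, since its exact definition is given only in the full article.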
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations