This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Artificial Intelligence Deployment of Conversational Support (AI-DOCS): A patient acceptability study
Citations: 0
Authors: 15
Year: 2025
Abstract
This study evaluated an artificial intelligence (AI) system based on a large language model (LLM) in conducting complex patient assessments, comparing its acceptability to that of human basic physician trainees (BPTs) during a divisional clinical examination (DCE). Artificial Intelligence Deployment of Conversational Support (AI-DOCS) scored similarly to BPTs in empathy, politeness, and comprehensiveness. AI-DOCS also elicited important clinical information that was not divulged to human examiners. Further research is needed to explore the integration of AI-DOCS into clinical workflows.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations