This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Optimizing ChatGPT’s Interpretation and Reporting of Delirium Assessment Outcomes: An Exploratory Study (Preprint)
Citations: 0
Authors: 7
Year: 2023
Abstract
BACKGROUND
Generative artificial intelligence (AI) and large language models, such as OpenAI's ChatGPT, have shown promising potential in supporting medical education and clinical decision making, given their vast knowledge base and natural language processing capabilities. As a general-purpose AI, ChatGPT can complete a wide range of tasks, including differential diagnosis, without additional training. However, its application to a series of specialized, context-specific tasks that mimic the workflow of a human assessor, such as administering a standardized assessment questionnaire, entering the assessment results in a standardized form, and interpreting them strictly according to credible, published scoring criteria, has not been thoroughly studied.

OBJECTIVE
This exploratory study aimed to (1) evaluate ChatGPT's ability to learn and administer a standardized informant-based delirium assessment tool, specifically the Sour Seven Questionnaire, via context-specific training; and (2) optimize ChatGPT's interpretation and reporting of the assessment results using a prompt engineering approach.

METHODS
Using prompt engineering, we provided context-specific training to ChatGPT-3.5 and ChatGPT-4, guiding the models to learn the assessment tool and subsequently identify and score delirium symptoms in clinical vignettes. Model performance was compared with human expert scores, followed by iterative prompt optimization to minimize inconsistencies and errors.

RESULTS
Both ChatGPT models demonstrated promising proficiency in applying the Sour Seven Questionnaire to the vignettes, despite initial inconsistencies and errors. Performance improved notably through iterative prompt engineering, enhancing the models' capacity to detect delirium symptoms and assign scores. Prompt optimizations included restricting the scoring methodology to definitive "Yes" or "No" responses, revising the evaluation prompt to mandate responses in a tabular format, and guiding the models to adhere to the two recommended actions specified in the Sour Seven Questionnaire.

CONCLUSIONS
Our findings provide preliminary evidence supporting the potential utility of AI models such as ChatGPT in administering standardized clinical assessment tools. The results highlight the importance of context-specific training and prompt engineering in harnessing the full potential of these models for healthcare applications. Despite these encouraging results, additional research is needed to establish broader generalizability and to validate the approach in real-world settings.
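For illustration, the prompt constraints reported in RESULTS (binary "Yes"/"No" scoring, tabular output, and adherence to the questionnaire's two recommended actions) could be encoded in a system prompt roughly as follows. This is a minimal sketch assuming the OpenAI Python client (Chat Completions API, openai>=1.0); the prompt wording, model name, and helper function are illustrative assumptions, not the authors' actual prompts or the questionnaire's verbatim items.

```python
# Minimal sketch of the constrained-prompting setup described in the abstract.
# Assumes the OpenAI Python client; prompt text is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system prompt encoding the reported optimizations:
# binary Yes/No scoring, tabular reporting, and restriction to the
# questionnaire's published recommended actions.
SYSTEM_PROMPT = """You are administering the Sour Seven Questionnaire, an
informant-based delirium screening tool.
Rules:
1. Score each of the seven items strictly as 'Yes' or 'No'; never use
   'Maybe', 'Unsure', or partial credit.
2. Report the assessment as a table with columns: Item | Response | Score.
3. After the table, state the total score and recommend only one of the two
   actions specified in the published scoring criteria."""

def assess_vignette(vignette: str, model: str = "gpt-4") -> str:
    """Ask the model to score a clinical vignette under the constrained prompt."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic output for reproducible scoring
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Clinical vignette:\n{vignette}"},
        ],
    )
    return response.choices[0].message.content

print(assess_vignette(
    "An 82-year-old inpatient who was alert yesterday is now drowsy, "
    "inattentive, and pulling at IV lines."
))
```

Setting the temperature to 0 is one simple way to reduce the run-to-run inconsistencies the abstract mentions, since repeated scoring of the same vignette should then be reproducible.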
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations