OpenAlex · Updated hourly · Last updated: 22.03.2026, 12:22

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

“Assessing ChatGPT's Performance in Answering Cervical Cancer Screening Questions to ChatGPT-generated Clinical Vignettes: A Pilot Study”

2023 · 4 citations · Research Square · Open Access
Open full text at publisher

4 citations · 2 authors · year 2023

Abstract

Objective: This research aims to determine the impact of ChatGPT-generated information on the clinical practice of preventive gynecology pertinent to cervical cancer screening in a primary care setting. Using prompts, ChatGPT (GPT-3.5 model) was explored for its ability to construct five different clinical vignettes on cervical cancer screening, each with a single relevant query and a subsequent answer based on the current standard of care. All clinical responses were compared with the current standard of care to assess their accuracy.

Design: This was a qualitative research-based pilot study.

Setting: The Chat Generative Pre-trained Transformer (ChatGPT) model-3.5 was explored to achieve the objective of this study.

Participants: ChatGPT (model-3.5) was prompted to generate five different clinical vignettes about cervical cancer screening, each followed by a query and a subsequent response to that query.

Results: ChatGPT (GPT-3.5 model) was able to provide five clinical vignettes on cervical cancer screening with relevant queries, but with answers of variable accuracy. Compared with the current standard of care, the answer was found to be unsatisfactory for one vignette, acceptable for two, and satisfactory for two. The model's ability to provide in-depth answers to cervical cancer screening queries in a primary care setting was found to be limited. When asked for citations to its information sources, the model could not provide accurate citations initially; on the fifth attempt it provided URLs (Uniform Resource Locators), but most of them failed to open the relevant pages on their respective websites.

Conclusions: This study found that ChatGPT's answers to clinical queries related to cervical cancer screening were of variable accuracy, indicating limited performance in this context. Concerns remain about the lack of in-depth answers and accurate citations. ChatGPT could be a valuable tool to augment a physician's clinical judgment if it provided information from updated evidence-based guidelines. Further research is required to explore its prospects in conjunction with medical informatics while taking measures to safeguard health data.
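The evaluation described above can be tallied in a short Python sketch. Only the three-level rating scale (unsatisfactory/acceptable/satisfactory) and the 1/2/2 split across the five vignette answers come from the abstract; the prompt template below is hypothetical, since the exact wording sent to ChatGPT is not given here.

```python
from collections import Counter

# Three-level accuracy scale used in the study's evaluation.
SCALE = ("unsatisfactory", "acceptable", "satisfactory")


def vignette_prompt(topic: str) -> str:
    """Build a prompt asking the model for a vignette plus one query.

    Illustrative only -- the study's actual prompt wording is not
    reported in the abstract.
    """
    return (
        f"Generate a clinical vignette about {topic}, followed by a "
        "single relevant clinical query and your answer to that query."
    )


def summarize(ratings: list[str]) -> dict[str, int]:
    """Count how many vignette answers fell into each accuracy category."""
    counts = Counter(ratings)
    return {level: counts.get(level, 0) for level in SCALE}


# Ratings reported in the abstract: one unsatisfactory, two acceptable,
# two satisfactory, out of five vignettes.
ratings = [
    "unsatisfactory",
    "acceptable", "acceptable",
    "satisfactory", "satisfactory",
]
print(summarize(ratings))
# → {'unsatisfactory': 1, 'acceptable': 2, 'satisfactory': 2}
```

Keeping the rating vocabulary in a fixed `SCALE` tuple ensures categories with zero answers still appear in the summary, which matters when comparing runs.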

Topics

Artificial Intelligence in Healthcare and Education · Ethics in Clinical Research · Topic Modeling