This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Exploring <scp>GPT</scp>‐4.0's efficiency in handling paediatric appendicitis questions
Citations: 0 · Authors: 5 · Year: 2024
Abstract
Objective: To explore the potential and accuracy of the generative dialogue artificial intelligence tool GPT‐4.0 in answering questions related to paediatric emergency appendicitis.
Methods: A cross‐sectional observational study design was used. We collected 134 appendicitis‐related questions from authoritative websites, such as Mayo Medical and APSA, covering all aspects of appendicitis, including simple and complex questions. These questions were answered by GPT‐4.0, and the answers were then evaluated for accuracy by three paediatric surgical experts using a quality score ranging from 0 to 5.
Results: We found that GPT‐4.0 achieved a high accuracy rate on simple questions, with a quality score of 4.65 (standard deviation 0.51). For complex questions, the average score was 3.77 (standard deviation 0.68), a significant difference between the two (P < .05). On clinical questions, the accuracy score of GPT‐4.0 was 4.00 (standard deviation 0.21). When answering actual questions from families of children with appendicitis, the accuracy score was 4.12 (standard deviation 0.59). This accuracy lies between that for simple and complex questions, and it can broadly meet the accuracy requirements of clinical questions. Notably, GPT‐4.0 demonstrated empathy in answering some questions, which might further enhance patient satisfaction.
Conclusion: GPT‐4.0 showed potential and accuracy in handling paediatric appendicitis questions, especially simple and clinical questions. However, improvements are still needed in handling complex questions and keeping information up to date. Despite these limitations, the model is expected to improve the quality of medical services and enhance patient satisfaction.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations