This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Pediatric dermatologists versus AI bots: Evaluating the medical knowledge and diagnostic capabilities of ChatGPT
12 Citations · 6 Authors · Year 2024
Abstract
This study evaluates the clinical accuracy of OpenAI's ChatGPT in pediatric dermatology by comparing its responses on multiple-choice and case-based questions to those of pediatric dermatologists. ChatGPT's versions 3.5 and 4.0 were tested against questions from the American Board of Dermatology and the "Photoquiz" section of Pediatric Dermatology. Results show that human pediatric dermatology clinicians generally outperformed both ChatGPT iterations, though ChatGPT-4.0 demonstrated comparable performance in some areas. The study highlights the potential of AI tools in aiding clinicians with medical knowledge and decision-making, while also emphasizing the need for continual advancements and clinician oversight in using such technologies.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,551 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,942 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations