This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Universal precautions required: Artificial intelligence takes on the Australian Medical Council’s trial examination
2 citations · 4 authors · 2023
Abstract
BACKGROUND AND OBJECTIVES: The potential of artificial intelligence in medical practice is increasingly being investigated. This study aimed to examine OpenAI's ChatGPT in answering medical multiple choice questions (MCQs) in an Australian context.
METHOD: We provided MCQs from the Australian Medical Council's (AMC) medical licensing practice examination to ChatGPT. The chatbot's responses were graded using the AMC's online portal. This experiment was repeated twice.
RESULTS: ChatGPT was moderately accurate in answering the questions, achieving a score of 29/50. It was able to generate answer explanations for most questions (45/50). The chatbot was moderately consistent, providing the same overall answer to 40 of the 50 questions between trial runs.
DISCUSSION: The moderate accuracy of ChatGPT demonstrates potential risks for both patients and physicians using this tool. Further research is required to create more accurate models and to critically appraise such models.