This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Medical Students’ Attitudes toward AI in Medicine and their Expectations for Medical Education
Citations: 8 · Authors: 5 · Year: 2023
Abstract
Objectives
Artificial intelligence (AI) is used in a variety of contexts in medicine. This involves the use of algorithms and software that analyze digital information to make diagnoses and suggest adapted therapies. It is unclear, however, what medical students know about AI in medicine, how they evaluate its application, and what they expect from their medical training accordingly. In the study presented here, we aimed to provide answers to these questions.

Methods
In this survey study, we asked medical students about their assessment of AI in medicine and recorded their ideas and suggestions for considering this topic in medical education. Fifty-eight medical students completed the survey.

Results
Almost all participants were aware of the use of AI in medicine and had an adequate understanding of it. They perceived AI in medicine to be reliable, trustworthy, and technically competent, but did not have much faith in it. They considered AI in medicine to be rather intelligent but not anthropomorphic. Participants were interested in the opportunities of AI in the medical context and wanted to learn more about it. They indicated that basic AI knowledge should be taught in medical studies, in particular, knowledge about modes of operation, ethics, areas of application, reliability, and possible risks.

Conclusions
We discuss the implications of these findings for curricular development in medical education. Medical students need to be equipped with the knowledge and skills to use AI effectively and ethically in their future practice. This includes understanding the limitations and potential biases of AI algorithms, as well as the sensible use of human oversight and continuous monitoring to catch algorithmic errors and ensure that final decisions rest with human clinicians.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,291 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,535 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations