This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
The Promise and Perils of Artificial Intelligence in Health Professions Education Practice and Scholarship
Citations: 26
Authors: 5
Year: 2024
Abstract
Artificial intelligence (AI) methods, especially machine learning and natural language processing, are increasingly affecting health professions education (HPE), including the medical school application and selection processes, assessment, and scholarship production. The rise of large language models over the past 18 months, such as ChatGPT, has raised questions about how best to incorporate these methods into HPE. The lack of training in AI among most HPE faculty and scholars poses an important challenge in facilitating such discussions. In this commentary, the authors provide a primer on the AI methods most often used in the practice and scholarship of HPE, discuss the most pressing challenges and opportunities these tools afford, and underscore that these methods should be understood as part of the larger set of statistical tools available.

Despite their ability to process huge amounts of data and their high performance completing some tasks, AI methods are only as good as the data on which they are trained. Of particular importance is that these models can perpetuate the biases that are present in those training datasets, and they can be applied in a biased manner by human users. A minimum set of expectations for the application of AI methods in HPE practice and scholarship is discussed in this commentary, including the interpretability of the models developed and the transparency needed into the use and characteristics of such methods.

The rise of AI methods is affecting multiple aspects of HPE, including raising questions about how best to incorporate these models into HPE practice and scholarship. In this commentary, we provide a primer on the AI methods most often used in HPE and discuss the most pressing challenges and opportunities these tools afford.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,197 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,047 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,410 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations