This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Early identification of Family Medicine residents at risk of failure using Natural Language Processing and Explainable Artificial Intelligence
Citations: 1
Authors: 7
Year: 2024
Abstract
Background: During residency, each resident is observed and receives feedback based on their performance. Residency training is demanding, and some residents struggle with their academic performance. A competency-based residency training program's success depends on its ability to identify residents in difficulty during their first year of post-graduate education and to provide them with timely intervention and support.

Objective: In large training programs such as Family Medicine, identifying residents at risk of failing their certification exams is difficult. We developed an AI system using state-of-the-art technologies in Machine Learning (ML), Deep Learning (DL), Natural Language Processing (NLP) and Explainable AI (XAI) to detect at-risk residents automatically.

Materials and Methods: The research was conducted in the 2023-24 academic year. We implemented ML, DL and NLP models for prediction and performance analysis. The target variable chosen for the prediction was whether the resident would fail or pass their certification exam. XAI was used to enhance the understanding of the model's inner workings.

Results: In total, the dataset comprised 1382 resident data points. The final model, a Support Vector Machine (SVM), achieved an accuracy of 89.05% and an F1 score of 74.54 on the multiclass classification when multimodal (text and tabular) data was used. This model outperformed the models that used only qualitative or only quantitative data.

Conclusion: Combining qualitative and quantitative data represents a novel approach and provided better classification results. This research demonstrates the feasibility of an automated AI system for the early identification of residents at risk of academic struggle.

Prior Abstract Presentation: Abstract presented at the AMEE (An International Association for Medical Education) Conference, Basel, Switzerland, August 24-28, 2024.
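The abstract describes fusing qualitative (text) and quantitative (tabular) resident data before training an SVM. The paper does not specify the feature pipeline; the following is a minimal illustrative sketch, assuming TF-IDF text features concatenated with scaled tabular scores in scikit-learn. All data, feature names, and labels below are invented toy stand-ins, not the study's data.

```python
# Hypothetical sketch of a multimodal (text + tabular) SVM classifier,
# loosely following the approach the abstract describes. The actual
# study's features and preprocessing are not public in this page.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in data (the study used 1382 real resident data points).
texts = [
    "resident shows strong clinical reasoning and communication",
    "struggles with time management and documentation",
    "excellent knowledge base, works well under pressure",
    "needs remediation in patient assessment skills",
]
# Hypothetical tabular features, e.g. an assessment score and a flag count.
tabular = np.array([[85.0, 1.0], [61.0, 7.0], [90.0, 0.0], [55.0, 9.0]])
labels = [1, 0, 1, 0]  # 1 = pass, 0 = at risk of failing

# Vectorize the qualitative (text) modality.
tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(texts)

# Scale the quantitative (tabular) modality and fuse both feature sets.
X_tab = StandardScaler().fit_transform(tabular)
X = hstack([X_text, csr_matrix(X_tab)])

# Train an SVM on the fused feature matrix.
clf = SVC(kernel="linear")
clf.fit(X, labels)
preds = clf.predict(X)  # in-sample predictions on the toy data
```

In a real pipeline the fitted `TfidfVectorizer` and `StandardScaler` would be applied to a held-out test set, and an XAI method (e.g. SHAP, as is common for such models) could attribute predictions to individual features.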
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations