This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Entrustment and EPAs for Artificial Intelligence (AI): A Framework to Safeguard the Use of AI in Health Professions Education
Citations: 10
Authors: 7
Year: 2024
Abstract
In this article, the authors propose repurposing the concept of entrustment to help guide the use of artificial intelligence (AI) in health professions education (HPE). Entrustment can help identify and mitigate the risks of incorporating into HPE practice generative AI tools that offer limited transparency about their accuracy, source material, and disclosure of bias. With AI's growing role in education-related activities, such as automated medical school application screening and appraisal of feedback quality and content, there is a critical need for a trust-based approach to ensure these technologies are beneficial and safe. Drawing parallels with HPE's entrustment concept, which assesses a trainee's readiness to perform clinical tasks (entrustable professional activities), the authors propose assessing the trustworthiness of AI tools to perform an HPE-related task across 3 characteristics: ability (competence to perform tasks accurately), integrity (transparency and honesty), and benevolence (alignment with ethical principles). The authors draw on existing theories of entrustment decision-making to envision a structured way to decide on AI's role and level of engagement in HPE-related tasks, including proposing an AI-specific entrustment scale. Identifying tasks that AI could be entrusted with provides a focus around which considerations of trustworthiness and entrustment decision-making may be synthesized, making explicit the risks associated with AI use and identifying strategies to mitigate those risks. Responsible, trustworthy, and ethical use of AI requires health professions educators to develop safeguards for its use in teaching, learning, and practice: guardrails that can be operationalized by applying the entrustment concept to AI.

Without such safeguards, HPE practice stands to be shaped by the oncoming wave of AI innovations tied to commercial motivations, rather than modeled after HPE principles, principles rooted in the trust and transparency that are built together with trainees and patients.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations