This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Why Artificial Intelligence Must Be Part of Dental Residency Curriculum
Citations: 0
Authors: 1
Year: 2025
Abstract
Artificial intelligence has become part of modern clinical life, influencing diagnosis, documentation, and even communication with patients. In dentistry, its early use highlights both opportunity and caution. Studies show that AI can support accuracy and efficiency while underscoring the need for critical interpretation and validation [1, 2]. Health professions organizations—including the World Health Organization (WHO), the American Medical Association (AMA), and the Association of American Medical Colleges (AAMC)—increasingly describe AI literacy as a professional competency [3-5]. These perspectives invite clinicians to understand not only what AI predicts, but how it does so—and when it merits human review.

Within dentistry, adoption remains uneven. Recent reports suggest that fewer than one in five U.S. dental programs include formal AI-related content, and most emphasize introductory exposure rather than measurable competency [6, 7]. This represents an opportunity to move from awareness to applied understanding.

Dentistry's strong visual and procedural foundation makes it well-suited to exploring the relationship between digital systems and human skill. Residency training, where clinicians learn through guided independence, offers a natural setting to develop discernment, curiosity, and ethical reasoning around AI. Predoctoral programs, often limited by curricular crowding, may introduce AI conceptually—through topics such as data integrity or bias—but residency programs can bring these ideas to life in clinical practice. This bridge between knowledge and application makes postgraduate education a meaningful space for exploring the responsible use of AI in patient care.

The discussion that follows organizes AI integration into four domains—each representing a practical dimension of learning and implementation. Together, they illustrate why AI matters for residency education and how it can be introduced thoughtfully and sustainably.
Table 1 outlines these domains and their key educational outcomes, while Figure 1 depicts how AI modalities connect with training processes and assessment cycles.

Why: AI can enhance diagnostic consistency, reduce interpretation variability, and identify trends that inform prevention and treatment. Systems for radiographic analysis, caries-risk prediction, periodontal bone-loss detection, and treatment-planning support are emerging across academic and clinical environments. Used responsibly, they heighten clinical awareness and help residents correlate imaging data with chart findings and periodontal measures. The learning opportunity lies in cultivating disciplined curiosity—questioning how, why, and when AI arrives at conclusions. Overreliance on algorithms (automation bias) or disregard of validated outputs (omission bias) can each pose risks to safety; deliberate reflection helps maintain balance.

How: AI applications can be incorporated directly into clinical instruction. Residents may review algorithm-flagged findings alongside faculty interpretations during morning huddles, assess discrepancies, and discuss underlying causes. Case-based discussions that compare AI-supported diagnoses with verified outcomes can reinforce calibration and clinical reasoning. Decision-support tools may also be used to model treatment-planning scenarios, illustrating how data inputs influence predictions. Integrating these activities within ongoing quality-improvement efforts demonstrates that technology contributes most effectively when paired with verification, teamwork, and ethical awareness.

Why: Understanding how algorithms operate—and where they can mislead—forms the foundation of responsible AI use. Didactic learning builds conceptual literacy, while simulation transforms that knowledge into judgment under realistic conditions. When combined, they foster reflection, calibration, and patient-centered decision-making.
How: Implementation can follow a modular and scaffolded design, linking foundational understanding to applied reasoning. Didactics may include short lectures, interactive tutorials, and case-based discussions that highlight bias detection, data provenance, and the limits of automation. Simulation—digital, haptic, or mixed-reality—offers opportunities to practice diagnostic reasoning while comparing human and AI-generated findings [8]. Accessible platforms now make such training feasible at scale through public–private collaborations. Free or low-cost AI-literacy courses from Coursera, edX, and the NIH's Bridge2AI initiative provide shared content that educators can adapt locally. Within federal systems, the VA Greater Los Angeles Healthcare System developed the Artificial Intelligence in Dentistry series on TRAIN—adapted from a national symposium that reached nearly 700 participants across disciplines. More than 96% of attendees said they would recommend the program, and 88% reported that they could identify ethical implications of AI in care. This example illustrates how public–academic–federal partnerships can expand AI literacy responsibly and inclusively [9].

Assessment: Evaluation may balance cognitive and behavioral dimensions. Cognitive assessments could include brief reflections on AI-assisted cases or structured oral examinations exploring algorithm interpretation and reasoning. Behavioral measures can draw from simulation analytics—accuracy, calibration, and adaptation patterns—paired with narrative feedback from mentors. Together, these approaches capture both knowledge and judgment, tracing a learner's progression from familiarity to discernment.

Why: Faculty interpret and contextualize innovation. Their confidence in emerging technologies shapes how effectively AI is taught, questioned, and modeled. Many educators express interest, yet lack formal exposure to AI concepts or practical frameworks for teaching them.
Building faculty competence in this area promotes consistency, equity, and reflection.

How: AI literacy can be incorporated into existing professional development programs to minimize duplication and broaden access. Peer-led workshops, interdisciplinary seminars with ethics or data science faculty, and collaborative case demonstrations can link AI concepts to educational practices such as calibration and feedback. Publicly available micro-credential programs and open-access materials may further supplement institutional offerings. Equity remains central. Faculty across diverse sites and specialties may benefit from shared access to AI tools, teaching resources, and continuing education. This inclusive approach prevents technological divides and fosters a collaborative culture where innovation grows collectively rather than unevenly.

Why: Governance transforms innovation into accountability. As AI becomes integrated into diagnostics, operations, and education, oversight helps maintain transparency, fairness, and trust. Without structured review, even well-intentioned efforts may introduce inconsistency, bias, or privacy concerns.

How: Interdisciplinary governance groups—including educators, clinicians, data scientists, and ethicists—can review AI tools before adoption, document validation, and clarify oversight roles. Institutions may develop policies outlining how AI-assisted findings are verified, how data are stored, and how human review complements automation. At broader levels, collaboration among organizations such as the ADA, ADEA, and academic consortia can harmonize guidance, reducing fragmentation across programs. Aligning institutional policies with frameworks from WHO, AMA, and AAMC offers shared ethical grounding and coherence [10]. Governance functions best as a living process.
Periodic audits, feedback sessions, and open "AI rounds" can invite discussion of lessons learned, cultivating a culture of transparency and continuous improvement rather than mere compliance.

Residency education bridges supervision and autonomy, making it a natural setting for reflective, evidence-based engagement with AI. Within this stage, learners can explore new tools safely, guided by mentorship that emphasizes judgment as much as efficiency. Integrating AI into training is less about technology than about cultivating awareness—of data, of bias, and of the clinician's role in interpreting both. As predoctoral and residency programs evolve together, they can help shape a generation of dentists who view AI not as an authority, but as an instrument of inquiry and care.

Artificial intelligence will continue to influence how dentistry is learned and practiced. Its real value lies in how thoughtfully it is applied—through evidence, empathy, and conscience.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations