OpenAlex · Updated hourly · Last updated: 30.03.2026, 21:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Generative Artificial Intelligence Should Not Compromise Traditional Medical Education

2024 · 5 citations · Academic Medicine · Open Access

5 citations · 2 authors · Year: 2024

Abstract

To the Editor: Ongoing discussions about generative artificial intelligence (AI) tools (e.g., ChatGPT) have highlighted their potential role in modernizing medical education. However, it is critical to anticipate the challenges that accompany the adoption of this new AI. There tends to be insufficient focus on the deficits of ChatGPT and other large language models, such as their tendency to produce “hallucinations,” that is, false or misleading information generated by an AI model. Experts have downplayed these inaccuracies as minor hiccups that will be viewed as laughable errors of the past.1 Yet the consequences of spreading medical misinformation are potentially profound. Because generative AI is proficient in producing authoritative-sounding prose, the risk of spreading misinformation and of fostering cognitive complacency among medical students cannot be overstated. Historical inequities can be perpetuated and immortalized if biased data are used to create AI algorithms that inform clinical decisions and continue to shape how different population groups are treated.2,3 The assumed corrective trajectory of generative AI over time does not reverse its present-day fabrications or the future distortion of evidentiary truth that may exacerbate health disparities. This underscores the importance of ensuring that students achieve a solid foundation in medical knowledge so that they can discern and correct generative AI’s outputs and mitigate overreliance on AI-assisted learning. Moreover, medical educators have touched on the budding ability of generative AI to appear more empathetic than its human-physician counterparts.1,4 This may reflect a more profound issue in medical education: the need for enhanced empathy training. Recognizing AI’s empathic capabilities should signal the importance of fostering these vital interpersonal communication skills, along with continued attention to cultural competency and humility.
While acknowledging the irreplaceable value of patient-physician interactions, some educators envision generative AI as enhancing these interactions. Here, lessons from the prior adoption of technologies warrant consideration. It was assumed that electronic health records would streamline clinical care, but they inadvertently distanced physicians from their patients.5 As we steer toward integrating generative AI into medical education, preserving the integrity of patient-physician relationships in a compassionate and unbiased manner is paramount. Medical educators should strive to adopt AI advancements while retaining an emphasis on traditional training to critically evaluate AI-generated information, without compromising the focus on empathy and effective patient-doctor communication. Generative AI holds the potential to augment clinical encounters and ultimately improve patient outcomes. Realizing this goal will require a new generation of well-trained and competent medical graduates who can drive future improvements in the accuracy of AI algorithms over time.

Yulin Hswen, ScD, MPH, assistant professor, Department of Epidemiology and Biostatistics, University of California San Francisco, San Francisco, California; email: [email protected]; ORCID: https://orcid.org/0000-0003-3203-1322

Thu T. Nguyen, ScD, MSPH, associate professor, Department of Epidemiology and Biostatistics, University of Maryland School of Public Health, College Park, Maryland; ORCID: https://orcid.org/0000-0003-1185-045X


Topics

Artificial Intelligence in Healthcare and Education · Biomedical and Engineering Education · Healthcare cost, quality, practices