OpenAlex · Updated hourly · Last updated: 29.03.2026, 17:47

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

33. The Hidden Curriculum behind AI-Generated Patient Imagery

2025 · 0 citations · 8 authors · Plastic & Reconstructive Surgery Global Open · Open Access

Abstract

BACKGROUND: As artificial intelligence (AI) is rapidly being integrated into plastic surgery, it plays a crucial role in informing patients and trainees. However, AI models can inherit biases from their training data, potentially leading to misrepresentations of patient populations. This study investigates the presence of such biases in AI-generated images of plastic surgery patients, highlighting the need for critical evaluation and education among trainees and patients.

METHODS: Three AI image generators—Midjourney, DreamStudio (Stability AI), and Leonardo AI—were used to generate a total of 2,600 images using the prompt “a photo of the face of a ___ patient,” encompassing various patient categories: facial plastic surgery (i.e., cleft lip, facial cosmetic surgery), breast surgery (i.e., augmentation, reconstruction), and burn surgery. Images were independently assessed by three blinded raters for demographic factors, presence of personal protective equipment (PPE), and presence of lesions. Real-world patient demographics were sourced from authoritative databases, including The Aesthetic Society, the National Birth Defects Prevention Network, and TriNetX. Statistical analysis involved Fleiss’ kappa for inter-rater reliability and Fisher’s exact tests to compare AI-generated image demographics with real-world data.

RESULTS: Significant biases were identified in the AI-generated images across the surgical categories. Non-white patients were overrepresented in cleft lip images generated by Midjourney and Stability AI, and none of the models accurately depicted cleft lip, even when prompts specified “before” and “after” surgical repair. Bias was particularly evident among facial cosmetic surgery subjects, where all platforms predominantly generated images of young white women, despite real-world data indicating that 70.9% of this patient population is aged 50 or older and 13.1% are male. All platforms significantly underrepresented non-white breast augmentation patients and failed to generate any images of subjects above age 50. Non-white patients were also consistently underrepresented in breast reconstruction images. Additionally, reconstruction patients were more likely than augmentation patients to be depicted in PPE on Midjourney and Stability AI. Likewise, these platforms were more likely to depict cleft lip subjects with PPE in post-surgical images than in pre-surgical images.

CONCLUSION: This study raises concerns about the accurate representation of diverse patient populations in AI-driven imagery, which carries the potential to influence trainee perceptions and patient expectations. Educating plastic surgery trainees on these biases is crucial to foster critical evaluation of AI tools and promote responsible image selection for patient education. Patients should be informed about the pitfalls of generative AI to ensure they are not misled by unrepresentative portrayals and do not develop unrealistic expectations.
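The study compares categorical demographic distributions in AI-generated images against real-world reference data using Fisher's exact test. As a minimal sketch of that kind of comparison, the pure-Python snippet below implements a two-sided Fisher's exact test for a 2×2 table; the counts used in the example are hypothetical illustrations, not the study's actual data.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    row and column margins that is no more likely than the observed one.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # Hypergeometric probability of a table with x in the top-left cell.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)   # smallest feasible top-left count
    hi = min(row1, col1)       # largest feasible top-left count
    # Small tolerance so ties with the observed table are included.
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts (NOT from the study): 90 of 100 AI-generated images
# rated white vs. an assumed real-world benchmark of 70 of 100.
p = fisher_exact_two_sided(90, 10, 70, 30)
print(f"p = {p:.4f}")
```

For a table this size the result agrees with `scipy.stats.fisher_exact`; the hand-rolled version is shown only to make the underlying hypergeometric computation explicit.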
