This is an overview page with metadata for this scientific article. The full article is available from the publisher.
A Blind Spot in the Algorithm: Assessing Bias in Artificial Intelligence–Generated Images of Plastic Surgery Patients
Citations: 0
Authors: 8
Year: 2026
Abstract
Background: Artificial intelligence (AI) models can inherit biases from their training data, a phenomenon documented in other fields, including the corporate and legal realms. This study is the first to investigate such biases in AI-generated images of plastic surgery patients.

Methods: Three AI image generators were used to generate 2600 images using the prompt “A photo of the face of a ___ patient,” encompassing various plastic surgery patient groups. Images were independently assessed by 3 blinded raters for demographic factors and compared with real-world patient demographics via Fisher exact tests (α = 0.05). Fleiss kappa was calculated to assess interrater reliability.

Results: The AI-generated images displayed numerous biases. All platforms overrepresented non-White patients and female patients in cleft lip images (P < 0.001). Platforms exclusively depicted female patients in images of facial cosmetic surgery and breast augmentation. Non-White patients and patients older than 50 years were underrepresented in aesthetic surgery images (P < 0.0001). Non-White patients were also underrepresented in images of breast augmentation and breast reconstruction (P < 0.0001).

Conclusions: Although AI offers valuable applications in education and surgical planning, its outputs necessitate critical evaluation by patients, physicians, and developers. Current depictions of plastic surgery patients by AI platforms can foster stereotypes about the sex and ethnicity of patients seeking plastic surgery. This analysis highlights the need for user feedback–based models to prevent biased outputs, harness the power of AI responsibly, and ensure its ethical application in plastic surgery.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,521 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,412 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,891 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,575 citations