This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Artificial Intelligence in Action: Racial and Gender Disparities in Academic Radiology
0
Citations
2
Authors
2025
Year
Abstract
Academic radiology continues to face persistent gender and racial disparities in career advancement. The emergence of generative artificial intelligence (AI) platforms offers new opportunities to analyze workforce diversity patterns rapidly and at scale. This study aimed to evaluate the interpretative capacity of three generative AI platforms (ChatGPT, DeepSeek, and Perplexity) in identifying disparities in academic rank and tenure status across gender and racial subgroups in academic radiology. The outputs of these AI models were compared with conventional human-led analyses for accuracy, limitations, and potential biases. We prompted each AI model to analyze publicly available Association of American Medical Colleges (AAMC) Faculty Roster data on tenure and rank distribution by gender and race using standardized query templates. Outputs were systematically evaluated for consistency, accuracy, and potential biases against benchmark human-curated studies. Comparative analysis included variations between AI platforms and traditional research methods, with particular attention to how each model interpreted and reported disparities. The AI models broadly recognized trends in faculty growth and underrepresentation, but their interpretations varied. Perplexity and DeepSeek provided more granular insights, such as declining tenure rates and intersectional disparities, while ChatGPT offered less detailed analyses. Key discrepancies included divergent temporal trends and policy recommendations, highlighting AI's limitations in capturing nuanced sociodemographic patterns. Generative AI shows promise in analyzing workforce disparities but requires validation to mitigate biases and inconsistencies. When used alongside traditional methods, AI can enhance understanding of inequities in academic medicine, provided that its outputs are critically evaluated for fairness and accuracy.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 cit.