This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Gender Bias in Artificial Intelligence‐Written Letters of Reference
Citations: 8 · Authors: 6 · Year: 2024
Abstract
While ChatGPT-generated LORs all showed a male bias in the language used, there was no gender bias difference in letters produced using traditionally masculine versus feminine names and pronouns. Other variables did induce gendered language, however. ChatGPT is a promising tool for LOR drafting, but users must be aware of potential biases introduced or propagated through these technologies.
Related Works
The Association of American Medical Colleges
1938 · 5,571 citations
Burnout and Satisfaction With Work-Life Balance Among US Physicians Relative to the General US Population
2012 · 3,326 citations
Physician burnout: contributors, consequences and solutions
2018 · 2,471 citations
Implicit Racial/Ethnic Bias Among Health Care Professionals and Its Influence on Health Care Outcomes: A Systematic Review
2015 · 2,077 citations
Burnout Among U.S. Medical Students, Residents, and Early Career Physicians Relative to the General U.S. Population
2014 · 1,980 citations