This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Exploring the role of ChatGPT in decision making for gender-affirming surgery
Citations: 0 · Authors: 5 · Year: 2025
Abstract
Aim: Large language models (LLMs) like ChatGPT have transformed access to health information. For transgender individuals considering gender-affirming surgery (GAS), accurate and reliable information is essential for informed decision making. This study aimed to quantitatively assess the use of ChatGPT among individuals considering GAS and its impact on their decision-making process.
Methods: A cross-sectional survey was conducted in January 2024 on Prolific. Participants were English-speaking U.S. users over 18 whose current gender differed from their gender assigned at birth. The survey collected demographic information, evaluated interest in GAS, and examined interactions with ChatGPT. Descriptive statistics were used for analysis.
Results: The study included 207 participants (average age 30.2 years), primarily identifying as non-binary (40.6%), transgender men (29.5%), and transgender women (13%). Most expressed interest in GAS (89%). The primary information sources for GAS were online forums (24.6%), medical websites (21.3%), and social media (17.4%). While many participants had used ChatGPT (73%), few had used it for GAS information (6.7%). Among those who did, the majority (70%) rated its usefulness as slight to moderate, and some reported a positive influence on their decision making (40%). Trust in ChatGPT's information was rated as moderate to high by 80% of those participants.
Conclusions: In our cohort, ChatGPT is less commonly used for GAS information than online forums and medical websites. This suggests that patients prefer platforms that offer visual content, human interaction, and relatability. These findings highlight the importance of guiding patients toward reliable health information sources, such as healthcare providers, reputable medical websites, and academic literature, to support informed decision making.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations