This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Racial Disparities in Knowledge of Cardiovascular Disease by a Chat-Based Artificial Intelligence Model
Citations: 4
Authors: 4
Year: 2023
Abstract
Background: Patients and their families often search online for information about their health status. A dialogue-based artificial intelligence (AI) language model (ChatGPT) has been developed for complex question answering. We sought to assess whether the AI model had knowledge of cardiovascular disease (CVD) racial disparities, including disparities associated with CVD risk factors and associated diseases.

Methods: To assess ChatGPT's responses to topics in cardiovascular disease disparities, we created sixty questions in twenty sets, with each set consisting of three questions whose text prompts varied by patient demographics. Each question was asked three times to assess variability in responses.

Results: A total of 180 responses were tabulated from ChatGPT's answers to the 60 questions asked in triplicate. Despite some variation in wording, all responses to the same prompt were consistent across different sessions. ChatGPT's responses to 63.4% of the questions (38 out of 60) were appropriate, 33.3% (20 out of 60) were inappropriate, and 3.3% (2 out of 60) were unreliable. Of the 180 prompt entries into ChatGPT, 141 (78.3%) were correct, 28 (15.5%) were hedging or indeterminate responses that could not be binarized into a correct or incorrect response, and 11 (6.1%) were incorrect. There were consistent themes in the flawed responses: 91% of incorrect and 79% of hedging responses related to a cardiovascular disease disparity affecting a minority or underserved racial group.

Conclusion: Our study showed that an online chat-based AI model has broad knowledge of CVD racial disparities, but persistent gaps remain in its knowledge about minority groups. Given that these models may be used by the general public, caution is advised in taking responses at face value.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations