This is an overview page with metadata for this scientific article. The full article is available from the publisher.
ChatGPT in transfusion medicine: A new frontier for patients?
Citations: 10
Authors: 1
Year: 2023
Abstract
Artificial intelligence (AI) systems are infiltrating and shaping medicine and health science research.1 One increasingly popular open-access AI application is ChatGPT (Chat Generative Pre-Trained Transformer, OpenAI, San Francisco, California), a large language model that can generate human-like text in response to user questions. The potential implications of this technology are far-reaching, from academic publishing and automated clinical documentation to medical education and knowledge assessment.2-6 Questions remain about how open-access AI models affect how patients engage in their own healthcare. To query ChatGPT's transfusion medicine content, a question was posed on the platform (Jan 30 Version, Free Research Preview8) from the potential perspective of a patient: “What does it mean to have anti-K antibodies?” Since the ChatGPT model is iterative and trained by information from the internet, user feedback, and AI trainer review,7 the response to the original question was regenerated twice using the “regenerate response” button, with each iteration separated by 5 minutes. ChatGPT responded with relevant results. The original response was succinct (Figure 1A). The second response additionally described symptoms related to an anti-K antibody-mediated transfusion reaction but erroneously implied that Rho(D) immune globulin could prevent development of anti-K antibodies during pregnancy (Figure 1B). The third response mentioned the Kell name but incorrectly indicated that the K antigen contributes to the determination of one's ABO blood group (Figure 1C). Clinicians, patients, and individuals who may generate medical education information should be aware of the potential to obtain AI-generated transfusion medicine information, which is currently subject to variability and potential errors. The authors declare no conflict of interest related to this research.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 citations