This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Can artificial intelligence (AI) educate your patient? A study to assess overall readability and pharmacists' perception of AI-generated patient education materials
Citations: 7
Authors: 4
Year: 2024
Abstract
Introduction: Pharmacists are critical in providing safe and accurate education to patients on disease states and medications. Artificial intelligence (AI) can generate patient education materials rapidly, potentially saving healthcare resources. However, the accuracy of these materials, and pharmacists' comfort with them, need to be assessed.

Objective: The purpose of this study was to assess the accuracy, readability, and likelihood of use of AI-generated patient education materials for ten common medications and disease states.

Methods: AI (Chat Generative Pre-Trained Transformer [ChatGPT] v3.5) was used to create patient education materials for the following medications or disease states: apixaban, continuous glucose monitoring (CGM), the Dietary Approaches to Stop Hypertension (DASH) diet, enoxaparin, hypertension, hypoglycemia, myocardial infarction, naloxone, semaglutide, and warfarin. The prompt "Write a patient education material for…", ending with each medication or disease state, was entered into the ChatGPT (OpenAI, San Francisco, CA) software. A similar prompt, "Write a patient education material for…at a 6th-grade reading level or lower", was then completed with the same medications and disease states. Ten clinical pharmacists were asked to review each educational material and report the time it took to review, the clinical and grammatical edits made, their confidence in the clinical accuracy of the materials, and the likelihood that they would use them with their patients. The materials were assessed for readability using the Flesch–Kincaid readability score.

Results: A total of 8 pharmacists completed both sets of reviews, for a total of 16 patient education materials assessed. There was no statistical difference between the two prompts in any pharmacist assessment. Overall confidence in accuracy was fair, and the overall readability score of the AI-generated materials decreased from 11.65 to 5.87 with the 6th-grade prompt (p < .001).

Conclusion: AI-generated patient education materials show promise in clinical practice; however, validating their clinical accuracy remains a burden. It is important to ensure that patient education materials are written at an appropriate readability level to increase the likelihood of patient understanding.
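The Flesch–Kincaid grade-level scores reported above (11.65 dropping to 5.87) combine average sentence length with average syllables per word. A minimal sketch of the computation in Python, using a rough vowel-group heuristic for syllable counting rather than a pronunciation dictionary (the sample sentence is illustrative, not from the study):

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of consecutive vowels; not dictionary-accurate.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1  # discount a silent trailing 'e'
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    # FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

sample = "Take this medicine once a day. Call your doctor if you feel dizzy."
print(round(flesch_kincaid_grade(sample), 2))
```

Short, common-word sentences like the sample score at an early-grade level, while dense clinical prose pushes the score toward the 11+ range the unprompted AI output showed.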
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations