OpenAlex · Updated hourly · Last updated: 08.04.2026, 08:52

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Abstract 4366319: Evaluating ChatGPT as a Patient Education Tool for Cardiovascular Medications: High Quality, Low Accessibility

2025 · 1 citation · Circulation
Open full text at publisher

Citations: 1 · Authors: 5 · Year: 2025

Abstract

Introduction: Cardiovascular diseases (CVDs) are the leading cause of death globally, yet medication nonadherence remains high. Artificial intelligence (AI) tools may help improve adherence by counseling patients about their medications. However, concerns about usability and general mistrust of AI-generated health content remain. Assessing AI responses to common patient questions can inform future patient education strategies.

Research Question: How does ChatGPT perform in accuracy, completeness, and readability when responding to patient questions about medications used for CVDs?

Methods: A standardized set of 11 questions was developed across three categories: therapeutic effects/usage, side effects, and lifestyle changes (Figure 1). Questions were sequentially input into ChatGPT-4o for commonly prescribed CVD drug classes (Figure 2). The AI-generated responses were compiled into a Qualtrics survey and evaluated independently by an internal medicine attending, a board-certified cardiologist, and a cardiac ICU pharmacist. Each rated accuracy and completeness on an 8-point Likert scale (1 = “not at all accurate/complete”; 8 = “extremely accurate/complete”). Flesch-Kincaid Grade Level and Reading Ease scores were calculated to provide accuracy-independent metrics of response quality. Group-wise comparisons evaluated differences in accuracy, completeness, and readability between categories.

Results: ChatGPT-generated responses were rated highly for accuracy (mean = 7.89) and completeness (mean = 7.80) across all questions (Table 1). However, the average reading level was college-grade, far above the recommended 6th-8th grade level for patient materials, indicating a mismatch with typical health literacy levels and poor overall readability. Category-specific analyses showed that questions regarding therapeutic effects/usage were significantly easier to read (p < .001) than those on side effects or lifestyle changes, potentially due to the relatively subjective nature of the latter two categories. Reviewers also noted key features missing from some responses, including pre-procedural medication counseling, dosing guidance, and layperson-friendly language.

Conclusion: ChatGPT responses were overall considered accurate and complete, but low readability scores suggest the information may be inaccessible for the average patient. Future AI tools should prioritize plain language and patient-centered design to enhance accessibility without sacrificing content quality.
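The Flesch-Kincaid metrics used in the Methods are standard, published formulas based on average sentence length and average syllables per word. The abstract does not state which tool the authors used to compute them; below is a minimal illustrative sketch in Python, with a crude vowel-group syllable heuristic (real tools use dictionaries and more careful rules), showing how the two scores are derived.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: one syllable per run of consecutive vowels.
    # This is an assumption for illustration, not the authors' method.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid(text: str) -> tuple[float, float]:
    """Return (grade level, reading ease) via the standard formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    # Flesch-Kincaid Grade Level: higher = harder to read.
    grade = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    # Flesch Reading Ease: higher = easier to read (60-70 ≈ plain English).
    ease = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    return grade, ease
```

Short, monosyllabic sentences score near or below grade 0 with a reading ease above 100, while the long, polysyllabic sentences typical of medical text push the grade level toward the college range reported in the Results.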


Topics

Artificial Intelligence in Healthcare and Education · Cardiovascular Health and Risk Factors · Medication Adherence and Compliance