OpenAlex · Updated hourly · Last updated: April 9, 2026, 23:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Accuracy and inclusiveness of health insurance information in generative artificial intelligence.

2025 · 0 citations · 3 authors · Journal of Clinical Oncology

Open full text at the publisher

Abstract

e13723

Background: Generative AI (genAI) has the potential to revolutionize how cancer patients seek information. As more cancer survivors look to genAI for guidance on health insurance topics, it is critical to evaluate the accuracy and inclusiveness of genAI-generated responses to questions about health insurance.

Methods: Prompts were constructed from the content of an NCI-funded health insurance literacy patient navigation intervention (CHAT-S). In CHAT-S, participants meet with a navigator over four 30-minute sessions covering health insurance terms and processes, the specifics of their own health insurance, healthcare laws, information about appeals, tips for budgeting, and financial resources. Topics from these sessions were converted into 13 unique AI prompts (e.g., what is preauthorization, how do I file an appeal). Each prompt was entered into Microsoft Copilot. A codebook was applied to the genAI responses to systematically evaluate accuracy and inclusiveness on a 0-2 scale (0 = no meaningful difference, 1 = appropriate slight difference, 2 = meaningful difference). To assess accuracy, genAI content was compared against the CHAT-S intervention content and evaluated for incorrect information, lack of assertive language, and missing context. To assess inclusiveness, content was evaluated for dehumanizing language and Flesch reading ease.

Results: Context was consistently lacking across all 13 genAI responses (mean: 2, standard deviation: 0). While every response included appropriate information, important information included in the booklet was absent from the genAI responses. For example, when defining a deductible, genAI did not mention that it resets annually. Overall, the main content presented in CHAT-S and in the genAI responses was consistent (1.2, 0.44). When content differed, it differed in specificity. For example, the booklet provided names and contact information for financial resources, whereas genAI linked to resource databases where a survivor could find additional resources independently. Assertiveness was appropriate across all genAI responses (1.2, 0.44). The language used by genAI was inclusive in sentiment but not in reading level: the average Flesch reading ease across responses was more challenging than recommended for health education materials (mean: 11th grade; recommended: 6th grade). Notably, the genAI responses did not produce any inaccurate information.

Conclusions: GenAI has the potential to guide and inform cancer survivors on health insurance topics. In this exploratory study, genAI responses included helpful steps to guide patients in understanding and using their health insurance. While we used broad prompts about insurance, future studies should evaluate the ability of genAI to generate tailored recommendations based on individuals' specific scenarios.
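The reading-level finding above rests on a standard readability formula. As a minimal sketch of how a grade-level estimate like "11th grade" can be computed, the snippet below implements the Flesch-Kincaid grade-level formula with a naive vowel-group syllable counter; the abstract does not specify the tool or syllable method the authors used, so both the helper functions and the syllable heuristic here are illustrative assumptions, not the study's actual pipeline.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic (assumption, not the study's method):
    # count contiguous vowel groups; every word has at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    # Sentences: runs of text ending in ., !, or ?; words: alphabetic tokens.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        raise ValueError("text must contain at least one sentence and one word")
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula:
    # 0.39 * (words/sentence) + 11.8 * (syllables/word) - 15.59
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Short, common words yield a low (even negative) grade, while dense insurance vocabulary like "preauthorization" pushes the estimate well above the 6th-grade target recommended for health education materials.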

Topics

Healthcare Systems and Public Health · Artificial Intelligence in Healthcare and Education · Impact of AI and Big Data on Business and Society