OpenAlex · Updated hourly · Last updated: 19.03.2026, 10:06

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

P-547 Artificial Intelligence-simplified information to advance reproductive genetic literacy and equitable care

2025 · 0 citations · Human Reproduction · Open Access

Citations: 0 · Authors: 10 · Year: 2025

Abstract

Study question
We evaluated the effectiveness of four large language models (LLMs), including GPT-3.5, GPT-4, Copilot, and Gemini, in simplifying patient education materials (PEMs) on reproductive genetics.

Summary answer
This study demonstrates the potential of LLMs to enhance the readability of complex PEMs, particularly for low-literacy patients, to reduce healthcare disparities and facilitate decision-making.

What is known already
Genomic medicine is revolutionizing healthcare by integrating genetic data to personalize prevention, diagnosis, and treatment. Reproductive genetic testing (preconception, preimplantation, and prenatal testing) supports this approach but remains underutilized due to complex medical language and low health literacy. Studies show that many patients struggle to make informed decisions because of insufficient comprehension and inadequate resources. While digital resources expand access to health information, existing PEMs often exceed recommended readability levels. LLMs, such as ChatGPT, offer a promising solution by simplifying complex information, improving health literacy, and promoting equitable healthcare access.

Study design, size, duration
We conducted a comparative observational study from April to November 2024, evaluating four LLMs (GPT-3.5, GPT-4, Copilot, and Gemini) in simplifying 30 reproductive genetic PEMs. We assessed readability improvements using seven standardized metrics and analyzed textual characteristics, such as word count and passive voice usage. To assess accuracy, we engaged 30 reproductive genetics experts who evaluated the LLM-generated texts in a blinded review.

Participants/materials, setting, methods
We sourced 30 original PEMs from reputable healthcare websites, covering six reproductive genetic topics. Each text was simplified by each of the four LLMs using a fixed prompt. To measure readability, we applied validated metrics and normalized the scores for comparability. We engaged 30 reproductive genetics experts to evaluate the accuracy, completeness, and omission relevance of the simplified texts in a blinded review on Qualtrics. We performed statistical analyses, including Kruskal-Wallis tests, to compare LLM performance on readability and content integrity.

Main results and the role of chance
All LLMs significantly improved readability, reducing complexity to below an 8th-grade reading level (p < 0.001). Gemini achieved the greatest readability improvements (mean FRE score: 79.1 ± 7.4, p = 2.8 × 10⁻¹⁵), followed closely by Copilot (79.5 ± 7.1, p = 1.5 × 10⁻¹⁵). Word count, long-sentence prevalence, and passive voice usage decreased across all models, with the largest reductions observed for Gemini and GPT-4. Expert evaluations revealed significant performance differences (p = 2.2 × 10⁻¹⁶). GPT-4 had the highest accuracy (4.1 ± 0.9), completeness (4.2 ± 0.8), and omission scores (4.0 ± 0.9), while Gemini had the lowest in all categories. Although Gemini simplified texts effectively, it often omitted critical medical content. GPT-4 provided the best balance between readability and content integrity, ensuring accessibility without sacrificing essential information. These findings highlight the trade-off between simplification and accuracy, reinforcing the importance of human oversight of AI-generated patient education materials.

Limitations, reasons for caution
Our study focused on English-language PEMs, limiting generalizability to other languages. We did not assess patient comprehension of the AI-simplified texts, so further research is needed to evaluate real-world patient engagement and decision-making before LLM-generated PEMs are integrated into clinical practice.

Wider implications of the findings
This study highlights the potential of AI/LLMs to simplify PEMs, enhance reproductive health literacy, promote equity, and inform responsible AI integration in clinical practice. Our approach demonstrates a scalable, patient-centered framework for improving health literacy and accessibility, with broader applicability across medical disciplines to advance equitable healthcare delivery.

Trial registration number
No
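The abstract reports mean FRE (Flesch Reading Ease) scores among its seven readability metrics. As an illustration only, not the study's actual pipeline, the sketch below computes FRE in plain Python using the standard formula, with a crude vowel-group syllable heuristic (real readability tools use pronunciation dictionaries or validated syllable counters):

```python
import re


def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels,
    # subtracting one for a trailing silent "e". Real tools
    # use pronunciation dictionaries for accuracy.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)


def flesch_reading_ease(text: str) -> float:
    # FRE = 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    # Higher scores mean easier text; ~60-70 corresponds to plain English.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

A simplified sentence scores far higher than dense clinical prose, which is the direction of effect the study measures; the group-wise comparison across the four models would then be done with a rank-based test such as Kruskal-Wallis, as the abstract states.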
