This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Comparing the readability of human- and AI-written informed consent forms for provisional dental restorations
Citations: 1
Authors: 2
Year: 2025
Abstract
Aims: This study aimed to evaluate the readability of informed consent forms for provisional crowns and bridges by comparing a human-written version with AI-generated texts produced by two large language models (LLMs): GPT-4o (OpenAI) and Claude 3.7 Sonnet (Anthropic).

Methods: A three-page informed consent form authored by a prosthodontic specialist served as the human-written reference. Using identical structured prompts, comparable consent forms were generated by GPT-4o and Claude 3.7 Sonnet. Specifically, the models were instructed to first explain the clinical purpose of provisional dental restorations and then generate a three-page patient-oriented informed consent form, avoiding unnecessary technical jargon and adopting the tone of a prosthodontic specialist. The prompts guided the models to address each section sequentially: title of the form, patient identification, introductory statement, treatment and procedures, expected benefits, expected outcomes without treatment, treatment alternatives, possible risks and complications, estimated duration of the procedure, and signature section. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL) metric, along with descriptive comparisons of word count, sentence count, and passive voice percentage.

Results: The human-written form contained 1158 words, achieved an FKGL score of 10.8, and used 34.5% passive voice. The GPT-4o form contained 956 words, with an FKGL of 12.6 and 20.4% passive voice. The Claude 3.7 Sonnet form contained 1338 words, with an FKGL of 14.7 and 35% passive voice. These results revealed marked differences in document length, sentence count, and passive voice usage, with the AI-generated texts displaying more complex sentence structures and higher reading grade levels.
Conclusion: Although all forms exceeded the recommended readability level for patient-facing documents, the AI-generated versions, particularly the Claude 3.7 Sonnet form, were more difficult to read due to greater length and more complex sentence structure. These results underscore the importance of human oversight in editing and simplifying AI-generated materials, ensuring they meet the readability standards essential for patient comprehension.
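The FKGL metric used in the study is a standard formula combining average sentence length and average syllables per word. As a minimal illustration (not the study's actual tooling, whose software is not named here), the grade level can be sketched in Python with a naive vowel-group syllable counter; dedicated tools use more accurate syllable dictionaries:

```python
import re


def count_syllables(word: str) -> int:
    # Naive heuristic: each run of consecutive vowels counts as one syllable.
    # Real readability tools use pronunciation dictionaries instead.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def fkgl(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

A score of roughly 6 to 8 corresponds to the middle-school reading level usually recommended for patient-facing documents; the forms in this study, at 10.8 to 14.7, all exceed it.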
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations