This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Assessing the suitability of ChatGPT in responding to public inquiries about dental crown restorations
Citations: 0
Authors: 6
Year: 2025
Abstract
AIM: This study aimed to assess the effectiveness of ChatGPT as an informational tool for patients considering dental crown restorations.

METHODS: Common patient inquiries about dental crown restorations were identified through online tools: Also Asked, Answer the Public, and Google's People Also Ask. The questions were categorized into three groups: Group 1 - functionality and appearance, Group 2 - material differences and cost, and Group 3 - longevity, shortcomings, and maintenance. ChatGPT 3.5 was used to generate responses to these questions. Responses were evaluated for usefulness; readability (FKG and SMOG indices); quality (Global Quality Scale); reliability (CLEAR tool); and understandability and actionability (PEMAT). Statistical analyses included ANOVA, Kruskal-Wallis, Mann-Whitney U, and Spearman correlation tests, performed using SPSS v20.

RESULTS: A total of 126 patient questions about dental crown restorations were analyzed. Most ChatGPT responses were rated as 'very useful' (77%) or 'useful' (20.6%), with responses on longevity, shortcomings, and maintenance receiving the highest usefulness ratings. Readability analyses indicated that approximately 53% of responses were difficult to read overall, though responses on longevity and maintenance were easier to read. Quality and reliability were high across all responses, with mean GQS and CLEAR scores of 4.70/5 and 24.37/25, respectively. Understandability and actionability (PEMAT) scores varied, with responses on longevity and maintenance performing best.

CONCLUSION: Our results suggest that ChatGPT may serve as a supplementary tool in addressing patient inquiries about dental crown restorations. However, its responses require improvement in readability, actionability, and technical specificity. These findings indicate that while AI tools like ChatGPT can complement patient education, they cannot replace professional dental consultation and guidance.
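The readability metrics named in the methods (Flesch-Kincaid Grade and SMOG) are computed from simple text statistics. The sketch below shows the standard published formulas; the vowel-group syllable counter is a crude heuristic of my own for illustration, not the tooling the authors used, and real readability software relies on dictionary-based syllabification.

```python
import math
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels.
    # (Assumption for this sketch; real tools syllabify more carefully.)
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fkg(text):
    """Flesch-Kincaid Grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

def smog(text):
    """SMOG grade: 1.0430*sqrt(polysyllables * 30/sentences) + 3.1291.

    Designed for samples of at least 30 sentences; shorter inputs are
    only indicative.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * (30 / len(sentences))) + 3.1291
```

Higher scores mean harder text: an FKG of roughly 6 corresponds to a sixth-grade reading level, which is why health-literacy guidance often flags responses scoring well above that as "difficult to read".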
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,578 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,470 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,984 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,814 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations