This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Comparison of responses from different artificial intelligence-powered chatbots regarding the All-on-four dental implant concept
16 citations · 1 author · 2025
Abstract
BACKGROUND: Recent advancements in Artificial Intelligence (AI) have transformed the healthcare field, particularly through chatbots such as ChatGPT, OpenEvidence, and MediSearch. These tools analyze complex data to aid clinical decision-making, enhancing efficiency in diagnosis, treatment planning, and patient management. When applied to the "All-on-Four" dental implant concept, AI facilitates immediate prosthetic restorations and meets the demand for expert guidance. This integration supports the long-term success of surgical outcomes by providing real-time support and improving patient education and postoperative satisfaction. This study aimed to evaluate the effectiveness of three AI-powered chatbots (ChatGPT 4.0, OpenEvidence, and MediSearch) in answering frequently asked questions regarding the All-on-Four dental implant concept.

METHOD: This study investigated the response accuracy of three AI-powered chatbots to common queries about the All-on-Four dental implant concept. Using alsoasked.com, twenty pertinent questions (ten patient-focused and ten technical) were identified. Oral and maxillofacial surgeons evaluated the chatbot responses using a 5-point Likert scale. Statistical analysis was performed with the Kruskal-Wallis test, supplemented by pairwise Mann-Whitney U tests with Bonferroni correction, to assess the significance of differences among the chatbots' performances.

RESULTS: The Kruskal-Wallis test showed statistically significant differences between the three chatbots for both patient and technical questions (p < 0.01). Pairwise comparisons were evaluated using the Mann-Whitney U test. While significant differences were found between every pair of chatbots for patient questions, no significant difference was observed between ChatGPT and MediSearch for technical questions (p = 0.158). When comparing responses of the same chatbot to patient and technical questions, MediSearch was found to perform better on technical questions (p < 0.001).
CONCLUSION: Advancements in technology have made AI-powered chatbots an inevitable influence in specialized medical fields such as oral and maxillofacial surgery. Our findings indicate that these chatbots can provide valuable information for patients undergoing medical procedures and serve as a resource for healthcare professionals.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations