This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Are artificial intelligence based chatbots reliable sources for patients regarding orthodontics?
Citations: 2
Authors: 5
Year: 2025
Abstract
Objectives: The objective of this study was to conduct a comprehensive and patient-centered evaluation of chatbot responses within the field of orthodontics, comparing three prominent chatbot platforms: ChatGPT-4, Microsoft Copilot, and Google Gemini. Material and Methods: Twenty orthodontic-related queries were presented to ChatGPT-4, Microsoft Copilot, and Google Gemini by ten orthodontic experts. To assess the accuracy and completeness of responses, a Likert scale (LS) was employed, while the clarity of responses was evaluated using a Global Quality Scale (GQS). Statistical analyses included one-way analysis of variance and post-hoc Tukey tests to assess the data, and a Pearson correlation test was used to determine the relationship between variables. Results: The results indicated that ChatGPT-4 (1.69 ± 0.10) and Microsoft Copilot (1.68 ± 0.10) achieved significantly higher LS scores compared to Google Gemini (2.27 ± 0.53) (P < 0.05). However, the GQS scores, which were 4.01 ± 0.31 for ChatGPT-4, 3.92 ± 0.60 for Google Gemini, and 4.09 ± 0.15 for Microsoft Copilot, showed no significant differences among the three chatbots (P > 0.05). Conclusion: While these chatbots generally handle basic orthodontic queries well, they show significant differences in responses to complex scenarios. ChatGPT-4 and Microsoft Copilot outperform Google Gemini in accurately addressing scenario-based questions, highlighting the importance of strong language comprehension, knowledge access, and advanced algorithms. This underscores the need for continued improvements in chatbot technology.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations