This is an overview page with metadata for this scientific article. The full article is available from the publisher.
How reliable are ChatGPT and Google’s answers to frequently asked questions about unicondylar knee arthroplasty from a scientific perspective?
4
Citations
2
Authors
2025
Year
Abstract
Introduction
Unicondylar knee arthroplasty (UKA) is a minimally invasive surgical technique that replaces a single compartment of the knee joint. Patients increasingly rely on digital tools such as Google and ChatGPT for healthcare information. This study compares the accuracy, reliability, and applicability of the information these two platforms provide on unicondylar knee arthroplasty.

Materials and Methods
This study used a descriptive and comparative content-analysis approach. Twelve frequently asked questions about unicondylar knee arthroplasty were identified through Google's "People Also Ask" section and then posed to ChatGPT-4. The responses were compared on scientific accuracy, level of detail, source reliability, applicability, and consistency. Readability was assessed using DISCERN, FKGL, SMOG, and FRES scores.

Results
A total of 83.3% of ChatGPT's responses were consistent with academic sources, compared with 58.3% for Google. ChatGPT's answers averaged 142.8 words, versus an average of 85.6 words for Google. Regarding source reliability, 66.7% of ChatGPT's responses were based on academic guidelines, compared with 41.7% for Google. ChatGPT's DISCERN score was 64.4 versus 48.7 for Google, while Google achieved a higher FRES score.

Conclusion
ChatGPT provides more scientifically accurate information than Google, while Google offers simpler and more comprehensible content. However, the academic language used by ChatGPT may be challenging for some patient groups, and the superficiality of Google's information is a significant limitation. In the future, the development of artificial-intelligence-based medical information tools could help improve patient safety and the quality of information dissemination.
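The readability indices named in the Methods (FKGL, SMOG, FRES) follow standard published formulas; DISCERN, by contrast, is a questionnaire instrument and has no closed-form score. A minimal sketch of the three formula-based indices, with illustrative counts that are not taken from the study:

```python
import math

def fres(words: int, sentences: int, syllables: int) -> float:
    # Flesch Reading Ease Score: higher values mean easier text
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def fkgl(words: int, sentences: int, syllables: int) -> float:
    # Flesch-Kincaid Grade Level: approximates a U.S. school grade
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog(polysyllables: int, sentences: int) -> float:
    # SMOG grade: based on words of 3+ syllables, normalized to 30 sentences
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# Illustrative sample: 100 words, 5 sentences, 160 syllables, 10 polysyllabic words
print(fres(100, 5, 160), fkgl(100, 5, 160), smog(10, 30))
```

A lower FRES (and higher FKGL/SMOG) for ChatGPT's longer, more academic answers is consistent with the study's finding that Google's content is simpler to read.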
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations