This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Comparing AI Chatbots to Live Practitioners of Homeopathy: A Comparative Retrospective Study
0
Citations
5
Authors
2026
Year
Abstract
<b>Background/Objectives</b>: The use of artificial intelligence (AI) to elicit health advice is a rapidly developing phenomenon that could dramatically change healthcare delivery, including in the field of homeopathy. However, the potential costs and benefits of this shift are largely unknown. <b>Methods</b>: Researchers studied whether homeopathy guidance for acute illnesses differed between large language model (LLM) AI chatbots and live practitioners. Practitioner notes from 100 cases were used to elicit remedy recommendations from four free, publicly accessible AI chatbots. The results were compared against the live practitioners' initial remedy recommendations, against the other AI platforms and a purpose-built (non-LLM) homeopathic remedy finder, and against subsequent queries on the same AI platforms using the same input. <b>Results</b>: AI chatbots regularly provided medical disclaimers, including recommendations to seek medical care, and provided remedy recommendations that were sometimes consistent with a live practitioner's initial recommendation. Across the 100 cases compared (<i>n</i> = 100), the initial practitioner-recommended remedy was included among an AI chatbot's recommendations in 36.5% of cases on average, and was the top recommendation in 20.8% of cases. In a small minority of cases (6%), all four AI chatbots agreed with the practitioner's initial recommendation, and in a slightly larger minority (10%), all four AI chatbots agreed on a remedy that was at odds with the practitioner's initial recommendation, indicating potential areas for further investigation. <b>Conclusions</b>: AI chatbot remedy recommendations were not routinely consistent with a live practitioner's initial recommendation or across AI platforms. Results were not even routinely consistent when the same case notes were entered multiple times on the same platform or when a researcher challenged them.
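The agreement metrics reported in the Results (inclusion rate and top-recommendation rate, averaged over bot-case pairs) can be sketched as a small Python computation. The data and the `agreement_rates` helper below are entirely illustrative assumptions, not taken from the study:

```python
# Hypothetical sketch of the study's agreement metrics: for each case,
# compare each chatbot's ranked remedy list against the practitioner's
# initial remedy. All data here are made up for illustration.

cases = [
    # (practitioner's initial remedy, {chatbot: ranked recommendations})
    ("Arnica", {"bot_a": ["Arnica", "Bryonia"],
                "bot_b": ["Bryonia", "Arnica"],
                "bot_c": ["Rhus tox"],
                "bot_d": ["Arnica"]}),
    ("Belladonna", {"bot_a": ["Aconite"],
                    "bot_b": ["Aconite", "Belladonna"],
                    "bot_c": ["Aconite"],
                    "bot_d": ["Aconite"]}),
]

def agreement_rates(cases):
    """Return (inclusion_rate, top_rate) averaged over all bot-case pairs."""
    included = top = total = 0
    for remedy, bots in cases:
        for recs in bots.values():
            total += 1
            included += remedy in recs          # remedy appears anywhere
            top += bool(recs) and recs[0] == remedy  # remedy ranked first
    return included / total, top / total

inc, top = agreement_rates(cases)
print(f"included: {inc:.1%}, top: {top:.1%}")  # → included: 50.0%, top: 25.0%
```

With real case data, the same loop would also support the all-four-agree checks (6% and 10% in the abstract) by comparing the four top recommendations per case.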
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 cit.