This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Which Platform Delivers More Accurate Responses to Common Questions About GLP1RA Therapy: Large Language Models or Internet Searches? (Preprint)
Citations: 0
Authors: 3
Year: 2025
Abstract
<sec> <title>BACKGROUND</title> Novel glucagon-like peptide 1 receptor agonists (GLP1RAs) for obesity treatment have generated much dialogue on digital media platforms. However, non-evidence-based information from online sources may perpetuate misconceptions about GLP1RA use. A promising new digital avenue for patient education is large language models (LLMs), which could be used as an alternative means of clarifying questions about GLP1RA therapy. </sec> <sec> <title>OBJECTIVE</title> This study compared LLM (ChatGPT 4o) and internet (Google) search responses to simulated questions about GLP1RA therapy. </sec> <sec> <title>METHODS</title> Responses were graded by 2 independent evaluators on safety, consensus with guidelines, objectivity, reproducibility, relevance, and explainability using a 5-point Likert scale. Mean scores were compared using independent t-tests. Qualitative observations were recorded. </sec> <sec> <title>RESULTS</title> LLM responses had significantly higher mean scores than internet responses in the "objectivity" (3.91 ± 0.63 vs 3.36 ± 0.80, p=0.038) and "reproducibility" (3.85 ± 0.49 vs 3.00 ± 0.97, p=0.007) categories. There was no significant difference in mean scores for "safety", "consensus", "relevance", and "explainability". However, LLM responses lacked updated information on more contemporary concerns surrounding GLP1RA use, such as its impact on fertility and mental health. </sec> <sec> <title>CONCLUSIONS</title> The study highlights the importance of healthcare provider communication, as both LLM and internet searches have limitations and may perpetuate misconceptions about GLP1RAs. </sec>
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations