This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Susceptibility of Large Language Models to User-Driven Factors in Medical Queries
Citations: 0
Authors: 7
Year: 2025
Abstract
Large language models (LLMs) are increasingly used in healthcare; however, their reliability is shaped not only by model design but also by how queries are phrased and how complete the information is. This study assesses how user-driven factors, including misinformation framing, source authority, model personas, and omission of critical clinical details, influence the diagnostic accuracy and reliability of LLM-generated medical responses. Utilizing two public datasets (MedQA and Medbullets), we conducted two tests: (1) perturbation—evaluating LLM persona (assistant vs. expert AI), misinformation source authority (inexperienced vs. expert), and tone (assertive vs. hedged); and (2) ablation—omission of key clinical data. Proprietary LLMs (GPT-4o (OpenAI), Claude-3.5 Sonnet (Anthropic), Claude-3.5 Haiku (Anthropic), Gemini-1.5 Pro (Google), Gemini-1.5 Flash (Google)) and open-source LLMs (LLaMA-3 8B, LLaMA-3 Med42 8B, DeepSeek-R1 8B) were used for evaluation. Results show that in the perturbation test, all LLMs were susceptible to user-driven misinformation, with an assertive tone exerting the strongest overall impact, while proprietary models were more vulnerable to strong or authoritative misinformation. In the ablation test, omitting physical examination findings and laboratory results caused the largest accuracy decline. Proprietary models achieved higher baseline accuracy but demonstrated sharper performance drops under biased or incomplete input. These findings highlight that structured prompts and a complete clinical context are essential for accurate responses. Users should avoid authoritative misinformation framing and provide a complete clinical context, especially for complex and challenging queries. By clarifying the impact of user-driven biases, this study contributes insights to the safe integration of LLMs into healthcare practice.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,391 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,257 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,685 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,501 citations