OpenAlex · Updated hourly · Last updated: 16.03.2026, 01:24

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Large language models provide unsafe answers to patient-posed medical questions

2026 · 1 citation · npj Digital Medicine · Open Access

Citations: 1

Authors: 17

Year: 2026

Abstract

Millions of patients regularly use large language model (LLM) chatbots for medical advice, raising patient safety concerns. This physician-led red-teaming study compares the safety of four publicly available chatbots (Claude by Anthropic, Gemini by Google, GPT-4o by OpenAI, and Llama-3.0/3.1-70B by Meta) on a new dataset, HealthAdvice, using an evaluation framework that enables quantitative and qualitative analysis. In total, 888 chatbot responses are evaluated for 222 patient-posed advice-seeking medical questions on primary care topics spanning internal medicine, women's health, and pediatrics. We find statistically significant differences between chatbots. The rate of problematic responses varies from 21.6% (Claude) to 43.2% (Llama), with unsafe responses varying from 5% (Claude) to 13% (GPT-4o, Llama). Qualitative results reveal chatbot responses with the potential to lead to serious patient harm. This study suggests that millions of patients could be receiving unsafe medical advice from publicly available chatbots, and further work is needed to improve the clinical safety of these powerful tools.

Similar works