OpenAlex · Updated hourly · Last updated: 19 Mar 2026, 17:26

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Bias in respiratory diagnoses by Large Language Models (LLMs) in Low Middle Income Countries (LMICs)


0 citations · 7 authors · year: 2025

Abstract

Use of Large Language Models (LLMs) created in North America and Europe could lead to a Western bias when they are used in Low and Middle Income Country (LMIC) healthcare settings. Clinicians and patients are likely to make increasing use of LLMs for diagnostic support.

Aims: To explore whether diagnostic suggestions made by LLMs are relevant in LMIC healthcare settings.

Methods: Five short respiratory clinical vignettes were produced. For each vignette, a doctor from one of four countries (Ghana, India, Jordan and Brazil) independently gave the four most likely diagnoses. Four LLMs (ChatGPT, Claude Sonnet, Google Gemini and Microsoft Copilot) were prompted with the same vignettes, and the top four diagnoses for each case were requested. A Virtual Private Network was used so that each LLM was accessed from each of the four countries. In a second experiment, each LLM was informed of the country in which the case was being seen. LLM output was compared with the doctors' responses.

Results: LLMs consistently offered diagnoses irrelevant in the LMIC setting, with little overlap between LLM and doctor diagnoses. Doctor diagnoses differed from LLM diagnoses (see Fig. F1).

[Figure F1 (erj;66/suppl_69/OA5379/F1): comparison of doctor and LLM diagnoses; image not shown on this page.]

Conclusions: LLMs should not be used as diagnostic aids in LMICs. LLMs are misaligned with the goal of an internationally useful diagnostic aid.
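The abstract reports "little overlap" between LLM and doctor diagnoses but does not say how overlap was scored. Below is a minimal sketch of one way such overlap could be quantified, assuming hypothetical diagnosis lists and naive case-insensitive string matching; the study's actual scoring method is not described here, and a real clinical comparison would also need synonym handling (e.g. "TB" vs "tuberculosis").

```python
from typing import List

def top4_overlap(doctor: List[str], llm: List[str]) -> int:
    """Count diagnoses appearing in both top-4 lists.

    Uses naive case-insensitive exact matching -- an illustrative
    assumption, not the paper's method; clinical synonym mapping
    would be needed in practice.
    """
    doctor_set = {d.strip().lower() for d in doctor}
    llm_set = {d.strip().lower() for d in llm}
    return len(doctor_set & llm_set)

# Hypothetical example lists -- NOT data from the study.
doctor_top4 = ["pulmonary tuberculosis", "pneumonia", "asthma", "COPD"]
llm_top4 = ["pneumonia", "pulmonary embolism", "lung cancer", "asthma"]

print(f"Shared diagnoses: {top4_overlap(doctor_top4, llm_top4)} / 4")
# -> Shared diagnoses: 2 / 4
```

Per-vignette counts like this could then be aggregated per LLM, per country, and per experiment (with and without the country disclosed), matching the comparisons the study design implies.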
