This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Forewarning Artificial Intelligence about Cognitive Biases
Citations: 5
Authors: 2
Year: 2025
Abstract
Artificial intelligence models display human-like cognitive biases when generating medical recommendations. We tested whether an explicit forewarning, "Please keep in mind cognitive biases and other pitfalls of reasoning," might mitigate biases in OpenAI's generative pretrained transformer large language model. We used 10 clinically nuanced cases to test specific biases with and without a forewarning. Responses from the forewarning group were 50% longer and discussed cognitive biases more than 100 times more frequently than responses from the control group. Despite these differences, the forewarning decreased overall bias by only 6.9%, and no bias was extinguished completely. These findings highlight the need for clinician vigilance when interpreting generated responses that might appear thoughtful and deliberate.

Highlights
- Artificial intelligence models can be warned to avoid racial and gender bias.
- Forewarning artificial intelligence models to avoid cognitive biases does not adequately mitigate multiple pitfalls of reasoning.
- Critical reasoning remains an important clinical skill for practicing physicians.
Related Works
The Cochrane Collaboration's tool for assessing risk of bias in randomised trials
2011 · 33,546 citations
Global burden of 369 diseases and injuries in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019
2020 · 18,432 citations
To Err Is Human
2000 · 14,072 citations
Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies
2007 · 9,497 citations
KDIGO 2024 Clinical Practice Guideline for the Evaluation and Management of Chronic Kidney Disease
2024 · 6,751 citations