OpenAlex · Updated hourly · Last updated: 27.03.2026, 14:21

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluating the Human Safety Net: Observational Study of Physician Responses to Unsafe AI Recommendations in High-Fidelity Simulation

2023 · 4 Citations · Open Access
Open full text at publisher

4 Citations

5 Authors

Year: 2023

Abstract

In the context of Artificial Intelligence (AI)-driven decision support systems for high-stakes environments, particularly in healthcare, ensuring the safety of human-AI interactions is paramount, given the potential risks associated with erroneous AI outputs. To address this, we conducted a prospective observational study involving 38 intensivists in a simulated medical setting. Physicians wore eye-tracking glasses and received AI-generated treatment recommendations, including unsafe ones. Most clinicians promptly rejected unsafe AI recommendations, with many seeking senior assistance. Intriguingly, physicians paid increased attention to unsafe AI recommendations, as indicated by eye-tracking data. However, they did not rely on traditional clinical sources for validation post-AI interaction, suggesting limited “debugging.” Our study emphasises the importance of human oversight in critical domains and highlights the value of eye-tracking in evaluating human-AI dynamics. Additionally, we observed human-human interactions, where an experimenter played the role of a bedside nurse, influencing a few physicians to accept unsafe AI recommendations. This underscores the complexity of trying to predict behavioural dynamics between humans and AI in high-stakes settings.


Topics

Artificial Intelligence in Healthcare and Education · Electronic Health Records Systems · Telemedicine and Telehealth Implementation