This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Just because you’re paranoid doesn’t mean they won’t side with the plaintiff: Examining perceptions of liability about AI in radiology
Citations: 3 · Authors: 5 · Year: 2024
Abstract
Background: Artificial Intelligence (AI) will have unintended consequences for radiology. When a radiologist misses an abnormality on an image, their liability may differ according to whether or not AI also missed the abnormality.

Methods: U.S. adults viewed a vignette describing a radiologist being sued for missing a brain bleed (N=652) or cancer (N=682). Participants were randomized to one of five conditions. In four conditions, they were told an AI system was used. Either AI agreed with the radiologist, also failing to find pathology (AI agree), or it did find pathology (AI disagree). In the AI agree+FOR condition, AI agreed with the radiologist and an AI false omission rate (FOR) of 1% was presented. In the AI disagree+FDR condition, AI disagreed and an AI false discovery rate (FDR) of 50% was presented. There was also a no-AI control condition. Otherwise, vignettes were identical. Participants indicated whether the radiologist met their duty of care as a proxy for whether they would side with the defense (radiologist) or the plaintiff at trial.

Results: Participants were more likely to side with the plaintiff in the AI disagree vs. AI agree condition (brain bleed: 72.9% vs. 50.0%, p=0.0054; cancer: 78.7% vs. 63.5%, p=0.00365) and in the AI disagree vs. no AI condition (brain bleed: 72.9% vs. 56.3%, p=0.0054; cancer: 78.7% vs. 65.2%, p=0.00895). Participants were less likely to side with the plaintiff when FDR or FOR were provided: AI disagree vs. AI disagree+FDR (brain bleed: 72.9% vs. 48.8%, p=0.00005; cancer: 78.7% vs. 73.1%, p=0.1507), and AI agree vs. AI agree+FOR (brain bleed: 50.0% vs. 34.0%, p=0.0044; cancer: 63.5% vs. 56.4%, p=0.1085).

Discussion: Radiologists who failed to find an abnormality are viewed as more culpable when they used an AI system that detected the abnormality. Presenting participants with AI accuracy data decreased perceived liability. These findings have relevance for courtroom proceedings.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations