OpenAlex · Updated hourly · Last updated: 19.03.2026, 10:06

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Accessing AI mammography reports impacts patient interest in pursuing a medical malpractice claim: The unintended consequences of including AI in patient portals

2024 · 4 citations · Open Access
Open full text at the publisher

Citations: 4
Authors: 7
Year: 2024

Abstract

Background: Artificial intelligence (AI) tools are increasingly used in breast imaging and in radiology more broadly. Patients express varying levels of trust in and acceptance of AI tools, but no research has examined how AI results can best be communicated in the patient portal setting.

Methods: English-speaking US women with at least one prior mammogram were recruited via Prolific and randomized to one of thirteen conditions. All participants were shown a vignette asking them to imagine receiving a BI-RADS 1 (Negative) radiologist report through their patient portal. Participants in twelve of the conditions also received an AI report with one of four AI abnormality scores (not flagged: 0 or 29; flagged: 31 or 50) and zero to two accompanying features (nothing; a only; a and b): (a) an abnormality cutoff threshold; (b) the AI tool's False Discovery Rate (FDR) or False Omission Rate (FOR). As the primary outcome, participants indicated whether they would consider a lawsuit if a one-year follow-up found evidence of Stage 3 breast cancer. Secondary outcomes included hypothetical follow-up decisions (e.g., seeking a second opinion), concern about breast cancer, and desire for additional imaging.

Results: Participants (n=1,623) were more likely to consider a lawsuit when an AI report was provided than when it was not (p=0.001). However, for most AI abnormality scores, providing the abnormality cutoff threshold and the FDR/FOR reduced lawsuit consideration relative to the AI abnormality score alone. Concern about breast cancer, desire for additional imaging, and follow-up requests (with the same radiologist, a different radiologist, or the ordering physician) increased as the AI abnormality score increased, though these effects were often mitigated by providing the FDR (and sometimes the FOR).

Conclusion: Disclosing AI feedback used in medical decision-making affects patients' perceptions and behaviors regarding malpractice and follow-up. Best practices are needed to engage and inform patients about the application of AI tools in their care while minimizing its unintended negative consequences.


Topics

Artificial Intelligence in Healthcare and Education · Radiology practices and education · Medical Malpractice and Liability Issues