This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Human-AI Interaction in the ScreenTrustCAD Trial: Recall Proportion and Positive Predictive Value Related to Screening Mammograms Flagged by AI CAD versus a Human Reader
Citations: 11
Authors: 4
Year: 2025
Abstract
Background
The ScreenTrustCAD trial was a prospective study that evaluated cancer detection rates for combinations of artificial intelligence (AI) computer-aided detection (CAD) and two radiologists. The results raised concerns that radiologists tended to agree with AI CAD too much (when AI CAD flagged erroneously) or too little (when AI CAD flagged correctly).
Purpose
To evaluate differences in recall proportion and positive predictive value (PPV) according to which reader flagged the mammogram for consensus discussion: AI CAD and/or radiologists.
Materials and Methods
Participants were enrolled from April 2021 to June 2022, and each examination was interpreted by three independent readers (two radiologists and AI CAD), after which positive findings were forwarded to consensus discussion. For each combination of readers flagging an examination, the proportion recalled was determined, and the PPV was calculated by dividing the number of pathologic evaluation-verified cancers by the number of positive examinations.
Results
The study included 54 991 women (median age, 55 years [IQR, 46-65 years]), among whom 5489 were flagged for consensus discussion and 1348 were recalled. For examinations flagged by one reader, the proportion recalled after flagging by one radiologist was higher (14.2% [263 of 1858]) than after flagging by AI CAD (4.6% [86 of 1886]) (P < .001), whereas the PPV for breast cancer was lower (3.4% [nine of 263] vs 22% [19 of 86]; P < .001). For examinations flagged by two readers, the proportion recalled after flagging by two radiologists was higher (57.2% [360 of 629]) than after flagging by AI CAD and one radiologist (38.6% [244 of 632]) (P < .001), whereas the PPV was lower (2.5% [nine of 360] vs 25.0% [61 of 244]; P < .001). For examinations flagged by all three readers, the proportion recalled was 82.6% (400 of 484) and the PPV was 34.2% (137 of 400).
Conclusion
A larger proportion of participants were recalled after initial flagging by radiologists than after flagging by AI CAD, but with a lower proportion of cancers among those recalled.
ClinicalTrials.gov Identifier: NCT04778670
© RSNA, 2025. See also the editorial by Grimm in this issue.
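The abstract defines PPV as the number of pathologic evaluation-verified cancers divided by the number of positive (recalled) examinations, and the recall proportion as the number recalled divided by the number flagged. As a minimal sketch (not code from the trial; the function name is illustrative), the single-reader figures reported above can be recomputed from their raw counts:

```python
# Illustrative only: recompute the abstract's single-reader figures
# from the reported raw counts.

def proportion(numerator: int, denominator: int) -> float:
    """Percentage, as used for both recall proportion and PPV."""
    return 100 * numerator / denominator

# Examinations flagged by AI CAD alone:
recall_ai = proportion(86, 1886)    # recalled / flagged, ~4.6%
ppv_ai = proportion(19, 86)         # cancers / recalled, ~22%

# Examinations flagged by one radiologist alone:
recall_rad = proportion(263, 1858)  # recalled / flagged, ~14.2%
ppv_rad = proportion(9, 263)        # cancers / recalled, ~3.4%
```

The same two ratios reproduce the double-reader and triple-reader figures in the Results, e.g. a PPV of 34.2% for 137 cancers among 400 recalls.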
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations