This is an overview page with metadata for this scientific work. The full article is available from the publisher.
What can reader studies of radiologist use of AI models teach us about adaptation? A signal detection modelling exploration
Citations: 0
Authors: 6
Year: 2026
Abstract
A significant component of regulatory approval for diagnostic artificial intelligence (AI) models is reader study evidence: diagnostic performance studies of radiologists on a curated set of images. Retrospective multi-reader, multi-case (MRMC) reader studies of AI use are commonly conducted in medical AI, though they tend to overestimate AI performance and underestimate human performance through the use of aggregated metrics. Further, there has been limited investigation into the impact of diagnostic AI models on radiological decision-making, with few if any cognitive modelling studies undertaken. This has serious implications, as the evidence base needed for implementing these models lacks verified cognitive research. This paper therefore explores a missed opportunity to model diagnostic reader studies, asking: what can reader studies teach us about radiologist adaptation to AI models? In this study, we used a 2021 reader study of a commercially available Australian diagnostic system by Harrison.ai, in which 20 expert radiologists read 1163 cases spanning 127 pathologies. We fit hierarchical signal detection models at the individual and pathology level. Our modelling results indicated that radiologists, at both the pathology and individual level, showed increased discriminability with AI, though with a more liberal response bias, leading to higher false positive rates. Further exploratory analysis based on these results indicated that disease coverage rates became homogenised with AI, and that different AI-generated segmentations potentially influenced correct rejection rates. We provide real-world interpretations of these findings.
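For readers unfamiliar with the signal detection terms used in the abstract, the sketch below computes the two standard equal-variance measures, discriminability (d′) and response bias (c), from a reader's confusion counts. This is a generic illustration of the framework, not the authors' hierarchical model; the counts in the usage example are hypothetical.

```python
from statistics import NormalDist

def sdt_metrics(hits, misses, false_alarms, correct_rejections):
    """Equal-variance signal detection metrics from a 2x2 confusion table.

    Returns (d_prime, criterion). d' measures discriminability; the
    criterion c measures response bias, where c < 0 indicates a liberal
    bias (more "pathology present" responses, hence more false positives).
    """
    # Log-linear correction keeps rates away from 0 and 1, which would
    # otherwise produce infinite z-scores.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Hypothetical counts for one reader, unaided vs. AI-assisted.
# The assisted case shows the pattern described in the abstract:
# higher d' (better discrimination) but a more liberal c (more false alarms).
unaided = sdt_metrics(70, 30, 10, 90)
assisted = sdt_metrics(85, 15, 20, 80)
```

Under these made-up counts, the assisted reader's d′ rises while c drops below zero, mirroring the abstract's finding of increased discriminability paired with a more liberal response bias.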
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,496 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,386 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,848 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,562 cit.