OpenAlex · Updated hourly · Last updated: 29.03.2026, 04:57

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

The Missing Link: Understanding Human-AI Interaction in FDA/CE-Approved Radiology Tools

Year: 2025 · Citations: 0 · Authors: 4 · Open Access

Abstract

AI-based tools are increasingly developed for clinical radiology, offering potential gains in diagnostic accuracy, consistency, and workflow efficiency. Yet adoption remains limited, as many systems lack support for effective human-AI collaboration within complex diagnostic workflows. To address this gap, we visually analysed 17 FDA- and/or CE-certified commercial radiology AI systems, examining how their interface designs operationalize human-AI interaction across three key dimensions: diagnostic task coverage, information richness of AI explanations, and human control mechanisms. Using visual diagramming, typology-based task mapping, and cross-system comparison, we characterize when and how AI outputs are introduced, how understandably they are communicated, and how clinicians can influence or correct AI behaviour. Our analysis shows that most systems augment isolated subtasks, such as detection, quantification, and annotation, while higher-order or multi-phase workflow support remains rare. AI explanations typically combine categorical, numerical, and visual outputs but rarely make the underlying reasoning transparent, leaving interpretive responsibility to clinicians. Control mechanisms vary in depth and frequency, ranging from single-step initiation to multi-stage walkthroughs, yet few systems support iterative engagement with and oversight of AI output. These findings reveal significant variation and fragmentation in current design practices, emphasizing the need for standardized frameworks to evaluate and guide human-AI interaction in clinical tools. Future work should link interaction design dimensions, such as control granularity and explanation richness, to safety, usability, and adoption outcomes, ensuring that AI systems enhance rather than constrain clinician expertise and agency in diagnostic decision-making.


Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI