OpenAlex · Updated hourly · Last update: 16 Mar 2026, 04:09

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

EP24.14: Explainable AI to support examiners for abnormality detection in fetal cardiac ultrasound screening

2023 · 1 citation · Ultrasound in Obstetrics and Gynecology · Open Access

1 citation · 10 authors · 2023

Abstract

Artificial intelligence (AI) might support medical diagnosis, but the rationale behind an AI's decision is often opaque. The aim of this study was to determine whether explainable AI improves examiners' detection rates in fetal echocardiographic screening.

The data set consisted of 160 cases and 344 videos (18-34 weeks), of which 13 congenital heart disease (CHD) cases and 26 videos were abnormal. All videos were taken by scanning from the abdominal view to the three-vessel trachea view. A graph chart shows the detection status of the normal substructures of the heart and vessels in the video as a two-dimensional trajectory of dots; the shape traced by the dots allows us to estimate whether a case is normal or has CHD.

First, each examiner assigned an abnormality score (examiner) to 40 randomly arranged videos (10 normal and 20 abnormal cases). Next, the examiner assigned another abnormality score (examiner + AI) to the same 40 fetal echocardiography videos using the graph chart and the abnormality score (AI) calculated from it. We calculated accuracy, false-positive rate (FPR), precision, recall, and F1 score at an abnormality-score threshold of 0.5, in addition to the AUC of the ROC curve.

The experts, fellows, and residents alone had mean accuracies of 0.928, 0.775, and 0.603, respectively; with AI support, their accuracies rose to 0.938, 0.823, and 0.731. The graph chart thus improved screening performance for examiners of all experience levels, most markedly for the less experienced. Cooperation between examiners and explainable AI is key to clinical application.

Please note: the publisher is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.
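The threshold-based evaluation described above can be sketched in a few lines. The function and data below are a minimal illustration, not the study's code or data: it binarises abnormality scores at 0.5 and computes the same metrics the abstract reports (accuracy, FPR, precision, recall, F1) from the resulting confusion counts.

```python
# Sketch of the evaluation protocol: binarise abnormality scores at a
# 0.5 threshold, then compute accuracy, FPR, precision, recall, and F1.
# Scores and labels below are hypothetical examples, not the study's data.

def screening_metrics(scores, labels, threshold=0.5):
    """Classification metrics at a fixed abnormality-score threshold.

    labels: 1 = abnormal (CHD), 0 = normal.
    """
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    accuracy = (tp + tn) / len(labels)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "fpr": fpr,
            "precision": precision, "recall": recall, "f1": f1}

# Hypothetical per-video abnormality scores and ground-truth labels.
labels = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [0.9, 0.7, 0.4, 0.2, 0.6, 0.1, 0.3, 0.8]
print(screening_metrics(scores, labels))
```

In practice the AUC of the ROC curve would be computed by sweeping this threshold over all observed scores rather than fixing it at 0.5.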
