OpenAlex · Updated hourly · Last updated: 12.03.2026, 12:07

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Can non-specialists provide high quality gold standard labels in challenging modalities?

2021 · 0 citations · arXiv (Cornell University) · Open Access
Open full text at publisher

0

Citations

8

Authors

2021

Year

Abstract

Probably yes. -- Supervised Deep Learning dominates performance scores for many computer vision tasks and defines the state-of-the-art. However, medical image analysis lags behind natural image applications. One of the many reasons is the lack of well annotated medical image data available to researchers. One of the first things researchers are told is that we require significant expertise to reliably and accurately interpret and label such data. We see significant inter- and intra-observer variability between expert annotations of medical images. Still, it is a widely held assumption that novice annotators are unable to provide useful annotations for use by clinical Deep Learning models. In this work we challenge this assumption and examine the implications of using a minimally trained novice labelling workforce to acquire annotations for a complex medical image dataset. We study the time and cost implications of using novice annotators, the raw performance of novice annotators compared to gold-standard expert annotators, and the downstream effects on a trained Deep Learning segmentation model's performance for detecting a specific congenital heart disease (hypoplastic left heart syndrome) in fetal ultrasound imaging.

Related works

Authors

Topics

Artificial Intelligence in Healthcare and Education · Fetal and Pediatric Neurological Disorders · Colorectal Cancer Screening and Detection