OpenAlex · Updated hourly · Last updated: 01.04.2026, 17:19

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Correcting crowdsourced annotations to improve detection of outcome types in evidence based medicine

2019 · 6 citations · International Joint Conference on Artificial Intelligence
Open full text at the publisher

6 citations · 4 authors · Year: 2019

Abstract

© 2019 for this paper by its authors. The validity and authenticity of annotations in datasets strongly influence the performance of Natural Language Processing (NLP) systems. In other words, poorly annotated datasets are likely to produce misleading results in most NLP problems, thereby misinforming consumers of these models, systems, or applications. This is a bottleneck in many domains, especially in healthcare, where crowdsourcing is a popular strategy for obtaining annotations. In this paper, we present a framework that automatically corrects incorrectly captured annotations of outcomes, thereby improving the quality of the crowdsourced annotations. We investigate a publicly available dataset called EBM-NLP, built to power NLP tasks in support of Evidence based Medicine (EBM), primarily focusing on health outcomes.
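The abstract does not spell out the correction mechanism here. As a rough illustration of what correcting noisy crowdsourced token-level outcome labels can look like, the sketch below aggregates several annotators' labels by per-token majority vote; the worker labels, label scheme ("O" = outcome token, "N" = non-outcome), and function name are invented for illustration and are not the paper's method:

```python
from collections import Counter

def correct_annotations(annotations):
    """Resolve disagreements between crowd annotators by taking the
    most common label for each token position (simple majority vote)."""
    corrected = []
    for token_labels in zip(*annotations):
        label, _count = Counter(token_labels).most_common(1)[0]
        corrected.append(label)
    return corrected

# Three annotators labelling the same 5-token span:
# "O" marks a token inside a health-outcome mention, "N" marks the rest.
workers = [
    ["N", "O", "O", "N", "N"],
    ["N", "O", "N", "N", "N"],
    ["N", "O", "O", "N", "O"],
]
print(correct_annotations(workers))  # -> ['N', 'O', 'O', 'N', 'N']
```

In practice a framework like the one described would go beyond plain voting (e.g., weighting annotators by reliability or using the surrounding text), but majority vote is a common baseline for cleaning crowdsourced span annotations.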

Topics

Topic Modeling · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education