This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Machine learning for identifying relevant publications in updates of systematic reviews of diagnostic test studies
Citations: 13
Authors: 4
Year: 2021
Abstract
Updating systematic reviews is often a time-consuming process that involves substantial human effort and is therefore not conducted as often as it should be. The aim of our research project was to explore the potential of machine learning methods to reduce human workload. Furthermore, we evaluated the performance of deep learning methods in comparison to more established machine learning methods. We used three available reviews of diagnostic test studies as the data set. In order to identify relevant publications, we used typical text pre-processing methods. The reference standard for the evaluation was a human consensus based on binary classification (inclusion, exclusion). For the evaluation of the models, various scenarios were generated using a grid of combinations of data preprocessing steps. Moreover, we evaluated each machine learning approach with an approach-specific predefined grid of tuning parameters using the Brier score metric. The best performance was obtained with an ensemble method for two of the reviews, and by a deep learning approach for the other review. Yet, the final performance of the approaches strongly depends on data preparation. Overall, machine learning methods provided reasonable classification performance. It seems possible to reduce human workload in updating systematic reviews by using machine learning methods. However, as the influence of data preprocessing on the final performance seems to be at least as important as the choice of the specific machine learning approach, users should not blindly expect good performance by solely using approaches from a popular class, such as deep learning.
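The abstract evaluates models with the Brier score, the mean squared difference between a predicted inclusion probability and the true binary label (1 = include, 0 = exclude); lower is better. A minimal sketch of that metric, using purely illustrative labels and probabilities rather than the study's data:

```python
def brier_score(y_true, y_prob):
    """Mean squared error between predicted probabilities and binary labels."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

# Illustrative example: 1 = relevant publication (include), 0 = exclude.
labels = [1, 0, 1, 0]
probs = [0.9, 0.2, 0.6, 0.1]
print(brier_score(labels, probs))  # -> 0.055 (up to floating-point error)
```

A perfectly calibrated and confident classifier would score 0; always predicting 0.5 yields 0.25, which serves as a useful baseline when comparing tuning-grid results.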
Related works
The PRISMA 2020 statement: an updated guideline for reporting systematic reviews
2021 · 87,288 citations
Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement
2009 · 82,928 citations
The Measurement of Observer Agreement for Categorical Data
1977 · 77,350 citations
Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement
2009 · 63,121 citations
Measuring inconsistency in meta-analyses
2003 · 61,787 citations