This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Comparing Deep Learning Approaches for Predicting Clinical Deterioration Using Chest Radiographs (Preprint)
0
Citations
9
Authors
2024
Year
Abstract
<sec> <title>BACKGROUND</title> Early detection of clinical deterioration and timely intervention for hospitalized patients can improve patient outcomes. Existing early warning systems rely on variables from structured data, such as vital signs and laboratory values, and do not incorporate other potentially predictive data modalities. Because respiratory failure is a common cause of deterioration, chest radiographs are often acquired in deteriorating patients, which may be informative for predicting their risk of intensive care unit (ICU) transfer. </sec> <sec> <title>OBJECTIVE</title> To compare and validate different computer vision models and data augmentation approaches with chest radiographs for predicting clinical deterioration. </sec> <sec> <title>METHODS</title> This retrospective observational study included adult patients hospitalized at the University of Wisconsin Health System between 2009 and 2020 with an elevated eCART score, a validated clinical deterioration early warning score, on the medical-surgical wards. Patients with a chest radiograph within 48 hours prior to the elevated score were included in this study. Three computer vision model architectures (VGG16, Densenet121, Vision Transformer) and four data augmentation methods (Histogram Normalization, Random Flip, Random Gaussian Noise, and Random Rotate) were compared using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) for predicting clinical deterioration (i.e., intensive care unit transfer or ward death in the following 24 hours). </sec> <sec> <title>RESULTS</title> The study included 21,817 patient admissions, of which 1,655 (7.6%) experienced the outcome. 
The Densenet121 model pre-trained on chest radiograph datasets with histogram normalization and random Gaussian noise augmentation had the highest discrimination (AUROC 0.734, AUPRC 0.414), while the vision transformer with 24 transformer blocks and random rotate augmentation had the lowest (AUROC 0.598). </sec> <sec> <title>CONCLUSIONS</title> The Densenet121 architecture pre-trained with chest radiographs performed better than other architectures in most experiments, and the addition of histogram normalization with random Gaussian noise data augmentation may enhance performance for Densenet121 and pre-trained VGG16 architectures. </sec> <sec> <title>CLINICALTRIAL</title> <p/> </sec>
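Two of the augmentations compared in the study, random flip and random Gaussian noise, can be illustrated with a minimal plain-Python sketch (not the study's code; the function names, the nested-list image representation, and the default parameters are assumptions for illustration):

```python
import random

def add_gaussian_noise(img, sigma=10.0, seed=None):
    """Random Gaussian noise augmentation: perturb each pixel of a
    grayscale image (nested list of 0-255 intensities) with noise drawn
    from N(0, sigma^2), then clip back to the valid intensity range."""
    rng = random.Random(seed)
    return [[min(255, max(0, round(p + rng.gauss(0.0, sigma)))) for p in row]
            for row in img]

def random_flip(img, p=0.5, seed=None):
    """Random flip augmentation: mirror the image horizontally with
    probability p, otherwise return it unchanged."""
    rng = random.Random(seed)
    return [row[::-1] for row in img] if rng.random() < p else img
```

In a real training pipeline these transforms would be applied on the fly each epoch (e.g., via a framework's transform composition), so the model sees a slightly different version of each radiograph every pass.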
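The AUROC and AUPRC metrics used to compare the models can be sketched in plain Python (illustrative only, not the study's evaluation code; in practice a library implementation such as scikit-learn's would be used):

```python
def auroc(labels, scores):
    """AUROC via the rank-sum identity: the probability that a randomly
    chosen positive receives a higher score than a randomly chosen
    negative, with ties counted as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auprc(labels, scores):
    """AUPRC in its average-precision form: rank cases by score, then
    average the precision observed at each positive case's rank."""
    ranked = sorted(zip(scores, labels), reverse=True)
    tp, ap = 0, 0.0
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / rank
    return ap / sum(labels)
```

With a 7.6% outcome rate as reported here, a random classifier's expected AUPRC is about 0.076, which is why the reported AUPRC of 0.414 represents a substantial lift even though it is far below 1.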