This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
COVID-VIT: Classification of Covid-19 from 3D CT chest images based on vision transformer model
Citations: 59
Authors: 7
Year: 2022
Abstract
This paper presents an explainable deep learning network to classify COVID from non-COVID cases based on 3D CT lung images. It applies a subset of the data from the MIA-COV19 challenge, developing a 3D form of the Vision Transformer deep learning architecture. The data comprise 1,924 subjects, 851 of whom were diagnosed with COVID; 1,552 were selected for training and 372 for testing. While most data volumes are in axial view, a number of subjects' data are in coronal or sagittal view with only one or two axial slices. Hence, although classification based on 3D data is investigated, 2D axial-view images remain the main focus in this competition. Two deep learning methods are studied: the vision transformer (ViT), based on attention models, and DenseNet, built upon a conventional convolutional neural network (CNN). Initial evaluation results indicate that ViT performs better than DenseNet, with F1 scores of 0.81 and 0.72 respectively. (Code is available on GitHub at https://github.com/xiaohong1/COVID-ViT.) This paper illustrates that the vision transformer performs best in comparison with other current state-of-the-art approaches for classifying COVID from CT lung images.
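The core idea contrasted in the abstract, attention-based classification of axial CT slices as opposed to a convolutional pipeline, can be illustrated with a minimal forward-pass sketch. This is not the authors' COVID-ViT implementation: the patch size, embedding width, single attention head, and random (untrained) weights below are all simplifying assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, p):
    # Split an H x W slice into non-overlapping p x p patches, each flattened.
    H, W = img.shape
    patches = img.reshape(H // p, p, W // p, p).transpose(0, 2, 1, 3)
    return patches.reshape(-1, p * p)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vit_logits(img, p=16, d=32):
    # All weights here are random stand-ins; a real model learns them.
    tokens = patchify(img, p)                      # (num_patches, p*p)
    n = tokens.shape[0]
    W_embed = rng.normal(0, 0.02, (p * p, d))
    x = tokens @ W_embed                           # patch embedding
    x = x + rng.normal(0, 0.02, (n, d))            # positional encoding (stand-in)
    # Single-head self-attention over the patch tokens.
    Wq, Wk, Wv = (rng.normal(0, 0.02, (d, d)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))           # (n, n) attention weights
    x = attn @ v
    # Mean-pool the tokens and project to two classes (COVID / non-COVID).
    W_head = rng.normal(0, 0.02, (d, 2))
    return x.mean(axis=0) @ W_head

# One synthetic 64 x 64 "axial slice" yields a two-class logit vector.
logits = vit_logits(rng.normal(size=(64, 64)), p=16)
print(logits.shape)  # (2,)
```

In contrast to DenseNet's fixed local receptive fields, the attention matrix here lets every patch weigh every other patch globally, which is the architectural difference the paper's comparison rests on.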
Similar works
La certeza de lo impredecible: Cultura Educación y Sociedad en tiempos de COVID19
2020 · 19,284 citations
A Multi-Modal Distributed Real-Time IoT System for Urban Traffic Control (Invited Paper)
2024 · 14,294 citations
UNet++: A Nested U-Net Architecture for Medical Image Segmentation
2018 · 8,752 citations
Review of deep learning: concepts, CNN architectures, challenges, applications, future directions
2021 · 7,370 citations
scikit-image: image processing in Python
2014 · 6,818 citations