OpenAlex · Updated hourly · Last updated: 17.03.2026, 00:19

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Whole-Body Bone Scan Segmentation Using SegFormer

2023 · 7 citations · 3 authors

Open full text at publisher

Abstract

Semantic segmentation has practical uses in everyday life, especially in the medical field, where it can help detect cancer metastasis at an early stage. CNN-based approaches such as FCN and DeepLabv3+ have long dominated the semantic segmentation field. The success of the Transformer in Natural Language Processing (NLP) prompted many researchers to apply the Transformer approach to semantic segmentation, giving rise to the Vision Transformer (ViT). Unlike the convolution approach, ViT processes images as patches, producing both local and global attention. One ViT-inspired model, SegFormer, combines a hierarchical Transformer encoder with a lightweight All-MLP decoder. The encoder generates low-resolution fine features, which capture small, detailed, fine-grained information such as edges, corners, and local patterns, and high-resolution coarse features, which capture the more general, global characteristics of a scene; the decoder combines these multi-level features to produce the final semantic segmentation mask. This allows SegFormer to capture both local and global contextual information from an image. This paper proposes using SegFormer to perform semantic segmentation of bone scan images into 12 classes based on bone regions. In a comparison against the FCN and DeepLabv3+ convolutional approaches, SegFormer outperforms both models with the highest mIoU of 77.86%.
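The decoder flow described in the abstract can be sketched as follows. This is a toy illustration of the All-MLP decoder idea, not the authors' code or the official SegFormer implementation: multi-level feature maps from a hierarchical encoder are each projected to a common channel width, upsampled to the highest resolution, concatenated per pixel, and fused by one final linear layer into class scores. Feature maps here are plain nested Python lists `[H][W][C]`, and the "linear" layers are fixed toy weight matrices, since the point is the data flow, not training.

```python
# Hedged sketch of SegFormer's All-MLP decoder data flow (assumption:
# simplified, pure-Python stand-in; real models use tensors and learned weights).

def linear(fmap, weight):
    """Per-pixel linear projection: out[o] = sum_c in[c] * weight[c][o]."""
    out_ch = len(weight[0])
    return [[[sum(px[c] * weight[c][o] for c in range(len(px)))
              for o in range(out_ch)]
             for px in row]
            for row in fmap]

def upsample_nearest(fmap, target_h, target_w):
    """Nearest-neighbour upsampling of an [H][W][C] map to (target_h, target_w)."""
    h, w = len(fmap), len(fmap[0])
    return [[fmap[i * h // target_h][j * w // target_w]
             for j in range(target_w)]
            for i in range(target_h)]

def all_mlp_decode(features, proj_weights, fuse_weight):
    """Fuse multi-scale features into one full-resolution score map."""
    target_h, target_w = len(features[0]), len(features[0][0])
    unified = []
    for fmap, w in zip(features, proj_weights):
        proj = linear(fmap, w)                      # unify channel width
        unified.append(upsample_nearest(proj, target_h, target_w))
    # concatenate channels per pixel, then fuse with a final linear layer
    concat = [[sum((u[i][j] for u in unified), [])
               for j in range(target_w)]
              for i in range(target_h)]
    return linear(concat, fuse_weight)

# Toy two-level pyramid: a 4x4 map with 2 channels and a 2x2 map with 3 channels.
f1 = [[[1.0, 0.0] for _ in range(4)] for _ in range(4)]
f2 = [[[0.0, 1.0, 2.0] for _ in range(2)] for _ in range(2)]
proj = [[[1.0], [1.0]], [[1.0], [1.0], [1.0]]]  # project both levels to 1 channel
fuse = [[1.0, 0.0], [0.0, 1.0]]                 # 2 concat channels -> 2 "classes"
scores = all_mlp_decode([f1, f2], proj, fuse)   # 4x4 map of per-class scores
```

The real decoder upsamples with bilinear interpolation and uses learned MLP weights, but the structure (project, upsample, concatenate, fuse) is the same.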


Topics

Radiomics and Machine Learning in Medical Imaging · Artificial Intelligence in Healthcare and Education · AI in cancer detection