This is an overview page with metadata for this scientific work. An external link to the full text is currently not available.
Temeljni modeli strojnoga učenja za prijenos znanja obučavani na skupu medicinskih podataka RadiologyNET : doktorski rad
(Foundation machine learning models for knowledge transfer trained on the RadiologyNET medical dataset: doctoral thesis)
Citations: 0
Authors: 1
Year: 2025
Abstract
The adoption of deep learning techniques in medical imaging has the potential to improve diagnostic accuracy and speed up clinical decision-making. However, the development of such techniques is slowed down by the scarcity of annotated datasets, as manual labelling of medical data is time-consuming, costly, and expert-dependent. For this reason, transfer learning has been widely adopted as a solution: a model is first pretrained on a large dataset, and then fine-tuned on downstream tasks (which are often data-scarce). However, publicly available large-scale medical datasets often focus narrowly on specific imaging modalities or anatomical regions (e.g. chest X-rays), thereby restricting their usefulness in constructing general-purpose models for transfer learning. The reliance on natural image datasets (e.g. ImageNet) for pretraining has shown mixed results in medical transfer learning, which underscores the need for large and diverse meaningful medical datasets that can be used in the development of pretrained models. This thesis addresses the lack of domain-relevant annotated data by introducing an unsupervised framework for labelling medical imaging datasets using a combination of Digital Imaging and Communications in Medicine (DICOM) images, structured metadata, and narrative diagnoses. The pipeline was applied to a large-scale multimodal medical dataset, RadiologyNET, with feature extraction and clustering techniques used to group images into semantically meaningful categories without relying on manual annotation. These pseudo-labels were then used to pretrain several widely used convolutional neural network architectures, including ResNet, EfficientNet, DenseNet, MobileNet, Inception and VGG. The pretrained models were evaluated on a wide range of downstream tasks (classification, regression, and segmentation) across multiple publicly available medical imaging datasets. 
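The labelling step described above groups extracted image features into clusters whose indices serve as pseudo-labels for pretraining. A minimal sketch of that idea, using plain k-means on toy feature vectors; the feature extractor, the cluster count, and k-means itself stand in here for whatever combination of DICOM pixel data, metadata, and diagnosis features the thesis pipeline actually uses.

```python
import numpy as np

def kmeans_pseudo_labels(features, k, iters=20):
    """Assign each sample's feature vector to one of k clusters (plain k-means).

    The returned cluster indices can be treated as pseudo-labels for
    supervised pretraining, in the spirit of the pipeline described above.
    """
    # simple deterministic init: pick k samples evenly spaced through the data
    idx = np.linspace(0, len(features) - 1, k).astype(int)
    centroids = features[idx].copy()
    for _ in range(iters):
        # distance of every sample to every centroid -> shape (n, k)
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of the samples assigned to it
        for j in range(k):
            if (labels == j).any():
                centroids[j] = features[labels == j].mean(axis=0)
    return labels

# toy stand-in for extracted image features: two well-separated blobs
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (50, 8)),
                   rng.normal(5, 0.1, (50, 8))])
pseudo = kmeans_pseudo_labels(feats, k=2)
```

In the actual pipeline the feature vectors would come from an encoder applied to DICOM images and associated metadata, and the resulting cluster assignments replace manual annotations as training targets.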
Comparative analyses were conducted against both ImageNet-pretrained models and models trained from randomly initialised weights. The findings show that RadiologyNET-pretrained models are effective when training resources are limited (i.e. reduced training data and training time); however, they did not consistently outperform ImageNet under normal training conditions. ImageNet-pretrained models achieve strong performance when fine-tuned, but the overall benefit of transfer learning (regardless of source) decreases as the amount of available training data increases, confirming that the impact of pretraining diminishes for problems with sufficient data.
Similar works
Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study
2020 · 22,630 citations
La certeza de lo impredecible: Cultura Educación y Sociedad en tiempos de COVID19
2020 · 19,284 citations
A Multi-Modal Distributed Real-Time IoT System for Urban Traffic Control (Invited Paper)
2024 · 14,276 citations
UNet++: A Nested U-Net Architecture for Medical Image Segmentation
2018 · 8,608 citations
Review of deep learning: concepts, CNN architectures, challenges, applications, future directions
2021 · 7,223 citations