This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Comparison of active learning algorithms in classifying head computed tomography reports using bidirectional encoder representations from transformers
Citations: 0
Authors: 13
Year: 2025
Abstract
PURPOSE: Systems equipped with natural language processing (NLP) can reduce radiological findings missed by physicians, but annotation costs are a burden in their development. This study aimed to compare the effects of active learning (AL) algorithms in NLP for estimating the significance of head computed tomography (CT) reports using bidirectional encoder representations from transformers (BERT). METHODS: A total of 3728 head CT reports annotated with five categories of importance were used, and UTH-BERT was adopted as the pre-trained BERT model. We assumed that 64% (2385 reports) of the data were initially in the unlabeled data pool (UDP), while the labeled data set (LD) used to train the model was empty. Twenty-five reports were repeatedly selected from the UDP and added to the LD, based on seven metrics: random sampling (RS: control), four uncertainty sampling (US) methods (least confidence (LC), margin sampling (MS), ratio of confidence (RC), and entropy sampling (ES)), and two distance-based sampling (DS) methods (cosine distance (CD) and Euclidean distance (ED)). The transition of the model's accuracy was evaluated using the test dataset. RESULTS: The accuracy of the models with US was significantly higher than with RS when there were < 1800 reports in the LD, whereas the accuracy with DS methods was significantly lower than with RS. Among the US methods, MS and RC were even better than the others. With the US methods, the required labeled data decreased by 15.4-40.5%, with RC being the most efficient. In addition, with the US methods, data for minor categories tended to be added to the LD earlier than with RS and DS. CONCLUSIONS: In the classification task for the importance of head CT reports, US methods, especially RC and MS, can lead to effective fine-tuning of BERT models and reduce the imbalance of categories. AL can contribute to other studies on larger datasets by providing efficient annotation.
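The four uncertainty sampling metrics named in the abstract (LC, MS, RC, ES) have standard definitions over a classifier's softmax output. The sketch below, a minimal illustration and not the paper's actual implementation, shows how each metric scores unlabeled samples and how a batch of the most uncertain reports could be selected; the function names and the batch size of 25 (taken from the abstract) are otherwise assumptions.

```python
import numpy as np

def least_confidence(probs):
    # LC: 1 - max class probability; higher value = less confident prediction
    return 1.0 - probs.max(axis=1)

def margin_sampling(probs):
    # MS: gap between the top two class probabilities; smaller gap = more uncertain
    part = np.sort(probs, axis=1)
    return part[:, -1] - part[:, -2]

def ratio_of_confidence(probs):
    # RC: ratio of second-best to best probability; closer to 1 = more uncertain
    part = np.sort(probs, axis=1)
    return part[:, -2] / part[:, -1]

def entropy_sampling(probs):
    # ES: Shannon entropy of the predicted distribution; higher = more uncertain
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def select_batch(probs, k=25, metric="margin"):
    """Pick the k most uncertain rows of `probs` under the chosen metric."""
    scores = {
        "least_confidence": least_confidence,
        "margin": margin_sampling,
        "ratio": ratio_of_confidence,
        "entropy": entropy_sampling,
    }[metric](probs)
    order = np.argsort(scores)
    if metric != "margin":
        # for LC, RC, and ES a larger score means more uncertain
        order = order[::-1]
    return order[:k]
```

In an AL loop, `probs` would come from the current BERT model's softmax over the UDP; the selected reports are then annotated and moved into the LD before the next fine-tuning round.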
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,560 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,451 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,948 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations