This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Deep learning approaches to automatic radiology report generation: A systematic review
29 citations · 3 authors · 2023
Abstract
A radiology report communicates the imaging findings to the referring clinicians. The rising number of referrals has created a bottleneck in healthcare: writing a report takes disproportionately more time than the imaging itself. Therefore, Automatic Radiology Report Generation (ARRG) has great potential to unclog this bottleneck. This study aims to provide a systematic review of Deep Learning (DL) approaches to ARRG. Specifically, it aims to answer the following research questions. What data have been used to train and evaluate DL approaches to ARRG? How are DL approaches to ARRG evaluated? How is DL used to generate reports from radiology images? We followed the PRISMA guidelines. We retrieved 1443 records from PubMed and Web of Science on November 3, 2021. Relevant studies were categorized and compared from multiple perspectives, and the corresponding findings were reported narratively. A total of 41 studies were included. We identified 14 radiology datasets. In terms of evaluation, we identified four commonly used natural language generation metrics, six clinical efficacy metrics, and other qualitative methods. We compared DL approaches with respect to the underlying neural network architecture, the method of text generation, problem representation, training strategy, interpretability, and intermediate processing. Data imbalance (normal versus abnormal cases) and the inner complexity of reports pose major difficulties in ARRG. More appropriate evaluation metrics are required, as well as datasets on a much larger scale. Leveraging structured representations of radiology reports and pre-trained language models warrants further research.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations