This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Toward Detecting Unnecessary Radiology Tests: Identifying Positive Findings through Text Classification of Chinese Radiology Reports (Preprint)
Citations: 0
Authors: 10
Year: 2019
Abstract
<sec> <title>UNSTRUCTURED</title>
Background: Identifying potentially unnecessary radiology tests in medical practice is an important but difficult task for quality and cost control. In China, payers and regulators sometimes rely on a single indicator, the proportion of tests that yield positive findings, to flag potentially unnecessary tests. This paper aims to develop a tool based on deep neural networks that automatically identifies positive findings in Chinese radiology reports. We use the tool to calculate the positive rate within sets of ultrasound tests ordered by clinicians and explore its potential for collecting evidence toward identifying unnecessary radiology tests.
Methods: Our proposed method is based on a supervised learning framework and trained on manually annotated ultrasound reports from one general hospital and one children's hospital in China. Convolutional neural networks (CNN), support vector machines (SVM), and rule-based patterns were used to classify each report as either positive or negative. We then applied the best classifier to a dataset consisting of all ultrasound reports from one year for subsequent analyses, stratified by type of ultrasound.
Results: The CNN (F-score 0.989) outperformed the SVM, the rule-based methods, and single-human annotation. Cross-hospital experiments were also conducted to demonstrate the generalizability of the methods. When applied to an unannotated dataset, the CNN revealed substantial variation in the positive rate across types of ultrasound reports.
Conclusions: Machine learning methods are effective for automatically identifying positive findings in radiology reports, which can facilitate evidence collection for detecting unnecessary radiology tests. </sec>
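The abstract compares three classifiers (CNN, SVM, and rule-based patterns) for labeling reports as positive or negative, and uses the resulting labels to compute a positive rate per test type. As a minimal sketch of the rule-based baseline and the positive-rate calculation, here is an illustrative version in Python; the patterns, negation heuristic, and English example reports are hypothetical stand-ins, not the authors' actual rules (which operate on Chinese report text):

```python
import re

# Hypothetical finding patterns -- illustrative only, not the
# authors' actual rule set (which targets Chinese reports).
POSITIVE_PATTERNS = [
    r"\bmass\b",
    r"\bnodule\b",
    r"\bcyst\b",
    r"thicken",
]
# Crude negation cue: skip sentences containing these words.
NEGATION = re.compile(r"\bno\b|\bwithout\b|\bunremarkable\b")


def classify_report(text: str) -> str:
    """Label a report 'positive' if any finding pattern appears
    outside a negated sentence, else 'negative'."""
    for sentence in re.split(r"[.;]", text.lower()):
        if NEGATION.search(sentence):
            continue  # skip negated sentences (simplistic heuristic)
        if any(re.search(p, sentence) for p in POSITIVE_PATTERNS):
            return "positive"
    return "negative"


def positive_rate(reports: list[str]) -> float:
    """Proportion of reports classified as positive -- the indicator
    the paper uses to look for clues of unnecessary testing."""
    labels = [classify_report(r) for r in reports]
    return labels.count("positive") / len(labels)
```

In the paper's setting, a rate like this would be computed separately for each ultrasound type; an unusually low positive rate for one type could then prompt closer review of how those tests are ordered.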
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,391 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,257 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,685 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,501 citations