This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Validation of natural language processing to extract breast cancer pathology procedures and results
Citations: 40 · Authors: 7 · Year: 2015
Abstract
BACKGROUND: Pathology reports typically require manual review to abstract research data. We developed a natural language processing (NLP) system to automatically interpret free-text breast pathology reports with limited assistance from manual abstraction.

METHODS: We used an iterative approach of machine learning algorithms and constructed groups of related findings to identify breast-related procedures and results from free-text pathology reports. We evaluated the NLP system using an all-or-nothing approach to determine which reports could be processed entirely using NLP and which reports needed manual review beyond NLP. We divided 3234 reports into development (2910; 90%) and evaluation (324; 10%) sets, using manually reviewed pathology data as our gold standard.

RESULTS: NLP correctly coded 12.7% of the evaluation set, flagged 49.1% of reports for manual review, incorrectly coded 30.8%, and correctly omitted 7.4% from the evaluation set due to irrelevancy (i.e., not breast-related). Common procedures and results were identified correctly (e.g., invasive ductal with 95.5% precision and 94.0% sensitivity), but entire reports were flagged for manual review because of rare findings and substantial variation in pathology report text.

CONCLUSIONS: The NLP system we developed did not perform sufficiently well for abstracting entire breast pathology reports. The all-or-nothing approach resulted in too broad a scope of work and limited our flexibility to identify breast pathology procedures and results. Our NLP system was also limited by the lack of gold-standard data on rare findings and by wide variation in pathology text. Focusing on individual, common elements and improving the standardization of pathology report text may improve performance.
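The precision and sensitivity figures reported above (e.g., 95.5% and 94.0% for invasive ductal) follow the standard definitions over true positives, false positives, and false negatives. The Python sketch below is purely illustrative and is not the authors' code; the counts in it are hypothetical, chosen only so the outputs land near the reported values.

```python
# Illustrative sketch of the standard precision/sensitivity definitions.
# The counts below are hypothetical, not taken from the paper.

def precision(tp: int, fp: int) -> float:
    """Fraction of NLP-coded findings that match the gold standard."""
    return tp / (tp + fp)

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of gold-standard findings the NLP system recovered."""
    return tp / (tp + fn)

# Hypothetical counts for one finding category (e.g., "invasive ductal"):
tp, fp, fn = 94, 4, 6
print(f"precision   = {precision(tp, fp):.3f}")    # 0.959
print(f"sensitivity = {sensitivity(tp, fn):.3f}")  # 0.940
```

Note that these are per-finding metrics; the paper's all-or-nothing evaluation instead classified each whole report as correctly coded, flagged for review, incorrectly coded, or correctly omitted.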
Related Works
A survey on deep learning in medical image analysis
2017 · 13,880 citations
pROC: an open-source package for R and S+ to analyze and compare ROC curves
2011 · 13,750 citations
Dermatologist-level classification of skin cancer with deep neural networks
2017 · 13,439 citations
A survey on Image Data Augmentation for Deep Learning
2019 · 12,033 citations
QuPath: Open source software for digital pathology image analysis
2017 · 8,379 citations