This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Harnessing explainable artificial intelligence for patient-to-clinical-trial matching: A proof-of-concept pilot study using phase I oncology trials
Citations: 9 · Authors: 6 · Year: 2024
Abstract
This study aims to develop explainable AI methods for matching patients to phase I oncology clinical trials using Natural Language Processing (NLP) techniques, addressing challenges in patient recruitment to improve efficiency in drug development. A prototype system based on modern NLP techniques was developed to match patient records with phase I oncology clinical trial protocols. Four criteria are considered for matching: cancer type, performance status, genetic mutation, and measurable disease. The system outputs a summary matching score along with explanations of the supporting evidence. The outputs of the AI system were evaluated against ground-truth matching results provided by a domain expert on a dataset of twelve synthesized dummy patient records and six clinical trial protocols. The system achieved a precision of 73.68%, sensitivity/recall of 56%, accuracy of 77.78%, and specificity of 89.36%. Further investigation of the misclassified cases indicated that abbreviation ambiguity and misunderstanding of context are significant contributors to errors. The system found evidence of non-matching for all false positive cases. To the best of our knowledge, no system in the public domain currently deploys an explainable AI-based approach to identify optimal patients for phase I oncology trials. This initial attempt to develop an AI system for patient-trial matching in the context of phase I oncology trials showed promising results that are set to increase efficiency without sacrificing quality in patient-trial matching.
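The four reported metrics are mutually consistent with a single confusion matrix over the 12 × 6 = 72 patient-trial pairs. A minimal Python sketch follows; the cell counts (TP=14, FP=5, FN=11, TN=42) are our inference from the reported percentages, not figures stated in the paper:

```python
# Hypothetical confusion-matrix counts inferred from the reported metrics;
# they reproduce all four figures over 72 patient-trial pairs.
TP, FP, FN, TN = 14, 5, 11, 42

precision   = TP / (TP + FP)                    # 14/19
recall      = TP / (TP + FN)                    # 14/25 (sensitivity)
accuracy    = (TP + TN) / (TP + FP + FN + TN)   # 56/72
specificity = TN / (TN + FP)                    # 42/47

print(f"precision={precision:.2%}, recall={recall:.2%}, "
      f"accuracy={accuracy:.2%}, specificity={specificity:.2%}")
# precision=73.68%, recall=56.00%, accuracy=77.78%, specificity=89.36%
```

With only 25 positive pairs among 72, the gap between recall (56%) and specificity (89.36%) suggests the system is more conservative about declaring matches than about rejecting them.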
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations