OpenAlex · Updated hourly · Last updated: 17.03.2026, 03:04

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Artificial intelligence (AI)–assisted eligibility screening for prostate cancer clinical trial matching.

2026 · 0 citations · Journal of Clinical Oncology

0 citations · 13 authors · 2026

Abstract

405 Background: Clinical trials are critical to advancing prostate cancer treatment, yet patient enrollment remains a major bottleneck. Despite strong interest in AI for trial matching, real-world use remains limited, in part because of complex infrastructure requirements. To address this, we developed a lightweight, scalable AI framework leveraging the Google Healthcare Search API, requiring minimal technical expertise, to automate trial eligibility screening.

Methods: We used a pre-trained GPT-5 large language model (LLM) to decompose the eligibility criteria of 13 prostate cancer clinical trials into 269 YES/NO questions, such that for a patient eligible for a given trial, chart review would yield YES for inclusion questions and NO for exclusion questions. From 348 patients enrolled in one of the trials, we built a dataset linking each eligibility question to each patient, resulting in 4,947 patient–question pairs (2,034 inclusion pairs with YES labels and 2,913 exclusion pairs with NO labels). The task was framed as binary classification. During evaluation, for each patient–question pair, the LLM extracted search phrases from the question to retrieve relevant EHR information via the Google Healthcare Search API, then used the question and the retrieved information to predict a YES or NO response. Performance was measured using accuracy, precision, recall, specificity, and F1 score.

Results: Across 4,947 patient–question pairs, our framework achieved 0.81 accuracy, 0.85 precision, 0.65 recall, 0.92 specificity, and an F1 score of 0.74. Results for the different trial phases are reported in Table 1 and demonstrate the approach's robustness across phases. Predictions for exclusion criteria were highly precise, while inclusion criteria were more challenging. Most errors involved criteria with complex nested logical conditions, definitions requiring clinical interpretation, and data not explicitly measurable or recorded in the EHR (e.g., planned treatments or clinician-assessed scores).

Conclusions: This work demonstrates a practical AI framework to support clinical trial matching. The system can identify complex disease states (biochemical recurrence vs. radiographic metastasis), castration status (sensitive, resistant), and genomics. Future work should focus on developing human-in-the-loop supervision and improving question decomposition to reduce errors in complex eligibility criteria.

Table 1. Screening performance of the proposed framework across trial phases. "#YES labels" and "#NO labels" give the number of patient–question pairs labeled YES and NO within each row's total.

Trial phase    #Pairs   #YES labels   #NO labels   Accuracy   Precision   Recall   Specificity   F1 score
I, II, I/II    4,227    1,735         2,492        0.80       0.84        0.66     0.92          0.74
III            720      299           421          0.82       0.90        0.63     0.95          0.74
All            4,947    2,034         2,913        0.81       0.85        0.65     0.92          0.74
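The evaluation described above is standard binary classification with YES (inclusion) as the positive class. As a minimal sketch, the metric definitions can be written out and checked against the reported overall numbers; the confusion-matrix counts below are approximations reconstructed from the published aggregates (2,034 YES labels, 2,913 NO labels, recall 0.65, specificity 0.92), not the authors' exact counts.

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics with YES as the positive class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # sensitivity on inclusion (YES) labels
    specificity = tn / (tn + fp)     # correctness on exclusion (NO) labels
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

# Approximate counts reconstructed from the reported "All" row:
# TP ≈ 0.65 * 2,034 ≈ 1,322 and TN ≈ 0.92 * 2,913 ≈ 2,680.
tp, fn = 1322, 2034 - 1322
tn, fp = 2680, 2913 - 2680
metrics = screening_metrics(tp, fp, tn, fn)
print({k: round(v, 2) for k, v in metrics.items()})
# → {'accuracy': 0.81, 'precision': 0.85, 'recall': 0.65,
#    'specificity': 0.92, 'f1': 0.74}
```

Rounded to two decimals, these reconstructed counts reproduce the abstract's overall results, which is a useful sanity check that the five reported metrics are mutually consistent.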
