OpenAlex · Updated hourly · Last updated: 2026-04-27, 15:00

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Abstract 1251: Real-world evaluation of multimodal AI: Foundation model-driven multimodal AI for GBM, NSCLC, and PDAC.

2026 · 0 citations · Cancer Research
Open full text at the publisher

0

Citations

8

Authors

2026

Year

Abstract

Purpose: Translating multimodal AI from curated research datasets to real-world clinical practice remains a critical challenge in precision oncology. In this study, we adapted HONeYBEE, a foundation model-driven multimodal AI platform, for real-world oncology workflows. We focused on three cancers: glioblastoma (GBM), non-small cell lung cancer (NSCLC), and pancreatic ductal adenocarcinoma (PDAC), using routine clinical documentation, radiology/pathology reports, and imaging studies to improve survival prediction and cohort stratification.

Methods: We curated three cohorts (GBM n=160, NSCLC n=580, PDAC n=171) spanning 911 patients from a single NCI-designated Cancer Center. The framework processed multimodal embeddings generated via HONeYBEE. Unlike curated research datasets, these cohorts had incomplete data (8.2-47% missing) and heterogeneous documentation and imaging protocols. We employed cross-modal attention mechanisms to dynamically learn hierarchical relationships between modalities while incorporating 99.96% dimensionality reduction. Cross-validation was used to evaluate the concordance index (C-index), risk stratification for survival outcomes, and three attribution methods that quantify per-modality contributions.

Results: The framework achieved C-indices of 0.637±0.087 for GBM, 0.598±0.021 for NSCLC, and 0.679±0.029 for PDAC, demonstrating consistent performance across cancer types despite substantial missing data. Risk stratification identified clinically meaningful groups with four-fold (GBM: low-risk 28 months vs. high-risk 6 months), five-fold (NSCLC: low-risk 60 months vs. high-risk 12 months), and three-fold (PDAC: low-risk 100 months vs. high-risk 35 months) differences in median survival. Attribution analysis revealed disease-specific patterns reflecting clinical reality. Text reports dominated GBM predictions (43.7%), capturing critical clinical information; imaging data drove NSCLC predictions (49%), reflecting the central role of CT in staging; and balanced contributions characterized PDAC (31-35% per modality), aligning with guidelines emphasizing comprehensive assessment. Patient-level attribution demonstrated that high-risk individuals relied heavily on adverse imaging features, while low-risk patients showed balanced modality contributions, providing actionable insights for clinical review.

Conclusions: This work successfully extends research from curated datasets to real-world clinical environments, demonstrating practical utility for treatment stratification and prognostic assessment across three challenging malignancies. The framework's modular architecture enables seamless integration with existing systems. By generating standardized patient embeddings from incomplete and heterogeneous data, we provide a scalable infrastructure for deploying multimodal AI in routine oncology care.

Citation Format: Aakash Gireesh Tripathi, Asim Waqas, Evan W. Davis, Jennifer B. Permuth, Jack Farinhas, Yasin Yilmaz, Matthew B. Schabath, Ghulam Rasool. Real-world evaluation of multimodal AI: Foundation model-driven multimodal AI for GBM, NSCLC, and PDAC [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2026; Part 1 (Regular Abstracts); 2026 Apr 17-22; San Diego, CA. Philadelphia (PA): AACR; Cancer Res 2026;86(7 Suppl):Abstract nr 1251.
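The evaluation metric central to the abstract, the concordance index (C-index), measures how often a model's predicted risk ordering agrees with observed survival ordering over comparable patient pairs, accounting for right-censoring. The following is a minimal illustrative sketch of that metric, not the authors' actual evaluation code; the toy survival times loosely echo the reported median-survival groups and are invented for demonstration.

```python
# Minimal C-index sketch for right-censored survival data.
# Illustrative only -- not HONeYBEE's implementation.

def concordance_index(times, events, risks):
    """Fraction of comparable patient pairs whose predicted risk ordering
    agrees with their observed survival ordering.

    times  : observed time (event or censoring) per patient
    events : 1 if the event (death) was observed, 0 if censored
    risks  : predicted risk score (higher = worse prognosis)
    """
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i had an observed
            # event strictly before patient j's follow-up time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1        # risk order matches outcome
                elif risks[i] == risks[j]:
                    ties += 1              # tied risks count half
    return (concordant + 0.5 * ties) / comparable

# Toy example (hypothetical patients): risk scores that perfectly
# reverse-order the survival times give a C-index of 1.0.
t = [6, 12, 28, 60]        # months
e = [1, 1, 1, 0]           # last patient censored
r = [0.9, 0.7, 0.4, 0.1]   # higher risk -> shorter survival
print(concordance_index(t, e, r))  # -> 1.0
```

A C-index of 0.5 corresponds to random ordering, so the reported values of 0.6-0.68 indicate moderate but consistent discriminative ability under heavy missingness.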

Related works

Authors

Institutions

Topics

Radiomics and Machine Learning in Medical Imaging · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education