OpenAlex · Updated hourly · Last updated: 13.03.2026, 18:22

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Fairness Evaluation of Risk Estimation Models for Lung Cancer Screening

2025 · 0 citations · The Journal of Machine Learning for Biomedical Imaging · Open Access

Citations: 0 · Authors: 7 · Year: 2025

Abstract

Lung cancer is the leading cause of cancer-related mortality in adults worldwide. Screening high-risk individuals with annual low-dose CT (LDCT) can support earlier detection and reduce deaths, but widespread implementation may strain the already limited radiology workforce. Artificial intelligence (AI) models have shown potential in estimating lung cancer risk from LDCT scans. However, high-risk populations for lung cancer are diverse, and these models’ performance across diverse demographic groups remains an open question. In this study, we used the JustEFAB ethical framework to evaluate potential performance disparities and fairness in two AI-based risk estimation models for lung cancer screening: the Sybil lung cancer risk model and the Venkadesh21 nodule risk estimator. We also examined disparities in the PanCan2b logistic regression model recommended in the British Thoracic Society nodule management guideline. Both AI-based models were trained on data from the U.S.-based National Lung Screening Trial (NLST), and assessed on a held-out NLST validation set. We evaluated area under the ROC curve (AUROC), sensitivity, and specificity across demographic subgroups, and explored potential confounding from clinical risk factors. We observed a statistically significant AUROC difference in Sybil’s performance between women (0.88, 95% CI: 0.86, 0.90) and men (0.81, 95% CI: 0.78, 0.84, p < .001). At 90% specificity, Venkadesh21 showed lower sensitivity for Black (0.39, 95% CI: 0.23, 0.59) than White participants (0.69, 95% CI: 0.65, 0.73). These differences were not explained by available clinical confounders and may be classified as unfair biases according to JustEFAB. Our findings highlight the importance of improving and monitoring model performance across underrepresented subgroups in lung cancer screening, as well as further research on algorithmic fairness in this field.
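The abstract describes comparing AUROC (with 95% confidence intervals) and sensitivity at a fixed 90% specificity across demographic subgroups. A minimal sketch of that kind of subgroup analysis, using scikit-learn on synthetic data (this is not the paper's code; scores, labels, and groups here are simulated, and the percentile bootstrap is one common CI choice):

```python
# Sketch: subgroup AUROC with bootstrap CIs and sensitivity at 90% specificity.
# All data below are synthetic; only the evaluation pattern mirrors the abstract.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

def sensitivity_at_specificity(y_true, y_score, target_spec=0.90):
    """Sensitivity (TPR) at the ROC point whose specificity is closest to target."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    specificity = 1 - fpr
    idx = np.argmin(np.abs(specificity - target_spec))
    return tpr[idx]

def bootstrap_auroc_ci(y_true, y_score, n_boot=1000, alpha=0.05):
    """Percentile-bootstrap 95% CI for AUROC."""
    stats = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample with replacement
        if len(np.unique(y_true[idx])) < 2:  # need both classes for AUROC
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Synthetic cohort: risk scores shifted upward for cases, two demographic groups.
n = 2000
y = rng.integers(0, 2, n)
score = rng.normal(loc=y.astype(float), scale=1.0)
group = rng.choice(["A", "B"], n)

for g in ["A", "B"]:
    mask = group == g
    auc = roc_auc_score(y[mask], score[mask])
    lo, hi = bootstrap_auroc_ci(y[mask], score[mask])
    sens = sensitivity_at_specificity(y[mask], score[mask])
    print(f"group {g}: AUROC {auc:.2f} (95% CI {lo:.2f}, {hi:.2f}), "
          f"sensitivity@90%spec {sens:.2f}")
```

Whether an observed gap (e.g. the Sybil AUROC difference between women and men) is statistically meaningful would additionally require a significance test such as DeLong's test or a permutation test, which this sketch omits.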


Topics

Lung Cancer Diagnosis and Treatment · Radiomics and Machine Learning in Medical Imaging · Artificial Intelligence in Healthcare and Education