OpenAlex · Updated hourly · Last updated: 06.04.2026, 02:46

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Beyond SMOTE: Evaluating large language models and mixture of experts for prediction of surgical site infections

2025 · 0 citations · Artificial Intelligence in Health · Open Access
Open full text at the publisher

0 citations · 1 author · 2025

Abstract

Handling severe class imbalance remains one of the most persistent barriers to deploying reliable artificial intelligence (AI) in healthcare. Conventional approaches such as SMOTE and other resampling strategies often inflate training performance but degrade under real-world distribution shift. We evaluated alternative modeling strategies, including classical machine learning, ensemble methods, imbalance-aware mixture of experts (MoE), and a fine-tuned large language model (LLM) (ModernBERT-large), for surgical site infection (SSI) prediction using structured electronic medical records. Across temporally shifted evaluation cohorts, the ModernBERT model consistently outperformed all baselines without synthetic oversampling or target ratio adjustments, achieving a Matthews correlation coefficient of 0.71 versus 0.35 for the best SMOTE-resampled CatBoost model. In contrast, MoE architectures failed to deliver robustness gains, and resampled classical models deteriorated under distributional change. These results highlight a paradigm shift: pre-trained language models can serve as deployment-stable alternatives to synthetic imbalance correction in structured clinical prediction tasks. Beyond SSI, this finding underscores the potential of LLMs to improve the resilience of healthcare AI systems where minority-class prediction is critical.
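The Matthews correlation coefficient (MCC) reported in the abstract is a standard metric for imbalanced binary classification, computed from the four confusion-matrix counts. A minimal sketch in Python, using illustrative counts that are not taken from the paper:

```python
import math

def mcc(tp: int, fp: int, fn: int, tn: int) -> float:
    """Matthews correlation coefficient from binary confusion-matrix counts.

    Ranges from -1 (total disagreement) through 0 (chance-level)
    to +1 (perfect prediction); robust to class imbalance because
    all four cells of the confusion matrix enter symmetrically.
    """
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: MCC is defined as 0 when any marginal sum is zero.
    return numerator / denominator if denominator else 0.0

# Hypothetical imbalanced cohort (1000 cases, 5% infection rate):
# 40 infections detected, 10 missed, 20 false alarms, 930 correct negatives.
print(round(mcc(tp=40, fp=20, fn=10, tn=930), 2))
```

Because MCC collapses toward zero when a model simply predicts the majority class, it is a stricter summary than accuracy for rare-event tasks such as SSI prediction.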


Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Imbalanced Data Classification Techniques