This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
MoME: Mixture of Visual Language Medical Experts for Medical Imaging Segmentation
Citations: 0
Authors: 6
Year: 2025
Abstract
In this study, we propose MoME, a Mixture of Visual Language Medical Experts, for medical image segmentation. MoME adapts the successful Mixture of Experts (MoE) paradigm, widely used in Large Language Models (LLMs), to medical vision-language tasks. The architecture enables dynamic expert selection by exploiting multi-scale visual features tailored to the intricacies of medical imagery, enriched with textual embeddings. Using an assembly of 10 datasets encompassing 3,410 CT scans, MoME demonstrates strong performance on a comprehensive medical imaging segmentation benchmark. Our approach integrates foundation models for medical imaging, benefiting from the established efficacy of MoE in boosting model performance and from the incorporation of textual information. Demonstrating competitive precision across multiple datasets, MoME offers a novel architecture for achieving robust results in medical image analysis.
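The abstract describes an MoE layer that gates among experts using visual features fused with textual embeddings. The following is a minimal, heavily simplified sketch of that idea in NumPy; the fusion by addition, the top-k routing, the expert count, and all class/function names are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    """Toy mixture-of-experts: a gate scores experts from the fused
    visual+text features, then mixes the top-k experts' outputs."""
    def __init__(self, dim, n_experts=6, top_k=2):
        self.experts = [rng.standard_normal((dim, dim)) * 0.02
                        for _ in range(n_experts)]
        self.gate = rng.standard_normal((dim, n_experts)) * 0.02
        self.top_k = top_k

    def __call__(self, visual_feats, text_emb):
        # Fuse visual features with the text embedding (assumption:
        # simple addition; the paper's fusion mechanism may differ).
        x = visual_feats + text_emb
        scores = softmax(x @ self.gate)            # (batch, n_experts)
        # Keep only the top-k experts per sample, renormalize weights.
        idx = np.argsort(scores, axis=-1)[:, -self.top_k:]
        out = np.zeros_like(x)
        for b in range(x.shape[0]):
            w = scores[b, idx[b]]
            w = w / w.sum()
            for weight, e in zip(w, idx[b]):
                out[b] += weight * (x[b] @ self.experts[e])
        return out

layer = MoELayer(dim=8)
v = rng.standard_normal((4, 8))    # e.g. pooled multi-scale visual features
t = rng.standard_normal((1, 8))    # text embedding, broadcast over the batch
y = layer(v, t)
print(y.shape)  # (4, 8)
```

Sparse top-k routing is the standard efficiency trick in LLM-style MoE: only a small subset of experts runs per input, while the gate learns which experts specialize in which inputs.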
Similar Works
MizAR 60 for Mizar 50
2023 · 74,704 citations
ImageNet: A large-scale hierarchical image database
2009 · 60,750 citations
Microsoft COCO: Common Objects in Context
2014 · 41,351 citations
Fully convolutional networks for semantic segmentation
2015 · 36,454 citations
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,562 citations